Other articles (55)

  • Websites made with MediaSPIP

    2 May 2011, by

    This page lists some websites based on MediaSPIP.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

  • Submit enhancements and plugins

    13 April 2011

    If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the MediaSPIP core will be considered.
    You can use the development discussion list to ask for help with creating a plugin. As MediaSPIP is based on SPIP, you can also use the SPIP discussion list, SPIP-Zone.

On other sites (9169)

  • ffmpeg Get time of frames from trimmed video

    17 November 2017, by TheOtherguyz4kj

    I am using FFmpeg in my application to extract frames from a video. The frames are added to a trim-video view that illustrates what is happening in the video at specific times, so each frame needs to represent some point in time within the video.

    I don't quite understand how FFmpeg is producing the frames. Here is my code:

    "-i",
    videoCroppedFile.getAbsolutePath(),
    "-vf",
    "fps=1/" + frameSeperation,
    mediaStorageDir.getAbsolutePath() +
    "/%d.jpg"

    My app allows you to record a video with a maximum length of 20 seconds. The number of frames extracted from the video depends on how long the captured video is. frameSeperation is calculated with the code below.

    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength;
    // Divide by 11 because there is a maximum of 11 frames on trim video view
    frameSeperationDouble /= 11;
    frameSeperationDouble = Math.ceil(frameSeperationDouble);
    int frameSeperation = (int) frameSeperationDouble;

    Maybe the above logic is very bad; if there is a better way, can somebody please tell me?

    Anyway, I ran the code, and below are a few test cases:

    • A video captured with a length of 6 seconds has 7 frames.
    • A video captured with a length of 2 seconds has 3 frames.
    • A video captured with a length of 10 seconds has 12 frames.
    • A video captured with a length of 15 seconds has 9 frames.
    • A video captured with a length of 20 seconds has 11 frames.

    There is no consistency, and because of this I find it hard to put timestamps against each frame. I feel like my logic is wrong or I'm not understanding something. Any help is much appreciated.
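
    For reference (my reading, not stated in the post): the fps filter emits its first frame at t = 0 and then one frame every 1/fps seconds, so "fps=1/" + N on a video of duration d produces roughly floor(d / N) + 1 images, and the integer division by 1000 plus Math.ceil in the code above shifts the counts further. A small sketch of the arithmetic (the helper name is mine):

    // Rough predictor of how many frames "-vf fps=1/N" extracts.
    // Assumes a frame at t = 0 and one every N seconds thereafter; real
    // counts can still differ by one because the true duration is not a
    // whole number of seconds.
    static int expectedFrames(double durationSeconds, int frameSeparation) {
        return (int) Math.floor(durationSeconds / frameSeparation) + 1;
    }

    // expectedFrames(6, 1)  -> 7   (matches the 6-second test above)
    // expectedFrames(2, 1)  -> 3   (matches the 2-second test)
    // expectedFrames(20, 2) -> 11  (matches the 20-second test)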

    Update 1

    So I did what you said in the comments:

    final FFmpeg ffmpeg = FFmpeg.getInstance(mContext);
    final File mediaStorageDir = new File(Environment.getExternalStorageDirectory()
            + "/Android/data/"
            + mContext.getPackageName()
            + "/vFrames");

    if (!mediaStorageDir.exists()) {
        mediaStorageDir.mkdirs();
    }

    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(mContext, Uri.fromFile(videoCroppedFile));
    String time = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
    long videoLength = Long.parseLong(time) / 1000;
    double frameSeperationDouble = (double) videoLength / 8;

    retriever.release();

    final String[] cmd = {
            "-i",
            videoCroppedFile.getAbsolutePath(),
            "-vf",
            "fps=1/" + frameSeperationDouble,
            "-vframes," + 8,
            mediaStorageDir.getAbsolutePath() + "/%d.jpg"
    };

    I also tried "-vframes=" + 8 at the same point in cmd. It doesn't seem to work at all now; no frames are being extracted from the video.
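
    For what it's worth, the likely problem (my diagnosis, not confirmed in the thread) is that "-vframes," + 8 produces the single token -vframes,8, which FFmpeg does not recognize as an option, so the whole command fails. Each option and its value must be a separate array element, as in this corrected sketch:

    // Each option and its value must be its own array element;
    // "-vframes," + 8 is passed to FFmpeg as the single token "-vframes,8".
    final String[] cmd = {
            "-i", videoCroppedFile.getAbsolutePath(),
            "-vf", "fps=1/" + frameSeperationDouble,
            "-vframes", "8",  // cap the output at 8 images
            mediaStorageDir.getAbsolutePath() + "/%d.jpg"
    };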

  • What kind of library can I use to edit video in real time? [closed]

    14 April 2020, by DepressingUtopian

    I am developing a video editor on Android (Java). For video encoding, I use a third-party library built on FFmpeg:

    https://github.com/tanersener/mobile-ffmpeg

    The problem is that everything FFmpeg does is saved to disk ... For example, you cannot apply a filter to video in real time; you have to wait until the result of applying the filter is written to disk, read back, and displayed in the VideoView. Can someone suggest a library that lets me apply a filter to a piece of video and see the result in real time? Or, for example, change the contrast of the video and see it in the VideoView.

    Perhaps there is a way to redirect the output stream of encoded video from FFmpeg directly to the VideoView?
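
    For what it's worth, one common approach on Android (my sketch, not from the post; the names are illustrative) is to play the video into a SurfaceTexture and draw it through OpenGL ES, applying the effect in a fragment shader, so the filter runs per frame at display time with no FFmpeg re-encode or disk round-trip. A minimal contrast shader for that setup:

    // Fragment shader applying a contrast adjustment to each video frame.
    // samplerExternalOES is required because MediaPlayer renders into a
    // SurfaceTexture (an external OES texture), not a plain 2D texture.
    private static final String CONTRAST_FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n" +
            "precision mediump float;\n" +
            "varying vec2 vTexCoord;\n" +
            "uniform samplerExternalOES uTexture;\n" +
            "uniform float uContrast;  // 1.0 = unchanged\n" +
            "void main() {\n" +
            "  vec4 c = texture2D(uTexture, vTexCoord);\n" +
            "  c.rgb = (c.rgb - 0.5) * uContrast + 0.5;  // pivot around mid-grey\n" +
            "  gl_FragColor = c;\n" +
            "}\n";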

  • VP8 for Real-time Video Applications

    15 February 2011, by noreply@blogger.com (John Luther)

    With the growing interest in videoconferencing on the web platform, it’s a good time to explore the features of VP8 that make it an exceptionally good codec for real-time applications like videoconferencing.

    VP8 Design History & Features

    Real-time applications were a primary use case when VP8 was designed. The VP8 encoder has features specifically engineered to overcome the challenges inherent in compressing and transmitting real-time video data.

    • Processor-adaptive encoding: 16 encoder complexity levels automatically (or manually) adjust encoder features such as motion search strategy, quantizer optimizations, and loop filtering strength.
    • Encoder can be configured to use a target percentage of the host CPU (see the sketch after this list).
    • Ability to measure the time taken to encode each frame and adjust encoder complexity dynamically to keep the encoding time per frame constant.
    • Robust error recovery (packet retransmission, forward error correction, recovery frame/new keyframe requests).
    • Temporal scalability (i.e., a single video bitstream that can degrade as needed depending on a participant’s available bandwidth).
    • Highly efficient decoding performance on low-power devices. Conventional video technology has grown to a state of complexity where dedicated hardware chips are needed to make it work well. With VP8, software-based solutions have proven to meet customer needs without requiring specialized hardware.
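
    Several of these controls are exposed through FFmpeg's libvpx wrapper. As a rough illustration (mine, not from the article; the file names are hypothetical), a command array in the same style as the posts above, tuned toward real-time VP8 encoding:

    // Hypothetical real-time-leaning VP8 encode via FFmpeg's libvpx encoder.
    final String[] cmd = {
            "-i", "input.mp4",        // hypothetical source clip
            "-c:v", "libvpx",         // VP8
            "-deadline", "realtime",  // bound the time spent encoding each frame
            "-cpu-used", "8",         // higher = faster, lower-complexity encoding
            "-b:v", "500k",           // modest target bitrate for constrained links
            "output.webm"
    };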

    For more information about real-time video features in VP8, see the slide presentation by WebM Project engineer Paul Wilkins (PDF file).

    Commercially Available Products

    Millions of people around the world have been using VP7/8 for video chat for years. VP8 is deployed in some of today’s most popular consumer videoconferencing applications, including Skype (group video calling), Sightspeed, ooVoo and Logitech Vid. All of these vendors are active WebM project supporters. VP8’s predecessor, VP7, has been used in Skype video calling since 2005 and is supported in the new Skype app for iPhone. Other real-time VP8 implementations are coming soon, including ooVoo, and VP8 will play a leading role in Google’s plans for real-time applications on the web platform.

    Real-time applications will be extremely important as the web platform matures. The WebM community has made significant improvements in VP8 for real-time use cases since our launch and will continue to do so in the future.

    John Luther is Product Manager of the WebM Project.