Media (0)

No media matching your criteria is available on the site.

Other articles (70)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (9352)

  • can't livestream on facebook as 720p when using API

    1 August 2018, by boygiandi

    I just realized that when we use the Facebook API to create a livestream (/me/live_videos), we only get 360p output at most, even when we send 1080p. But if we use the Facebook dialog, we can get 720p from the same video input.

    // Step 1: create the broadcast object.
    FB.ui({
       display: 'popup',
       method: 'live_broadcast',
       phase: 'create',
    }, function(response) {
       // Step 2: publish it, passing along the broadcast data from the create phase.
       FB.ui({
         display: 'popup',
         method: 'live_broadcast',
         phase: 'publish',
         broadcast_data: response
       }, function(response) {
       });
    });

    https://developers.facebook.com/docs/graph-api/reference/live-video/

    Does anyone know how to get 720p when using the Facebook Graph API?
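
    For reference, the create step can also be done server-side against the Graph API endpoint mentioned above. A minimal sketch with curl (the v3.1 version prefix and the access token are placeholders; field names follow the linked live-video reference):

    # Create a live video via the Graph API endpoint from the question.
    # ACCESS_TOKEN and the API version are assumptions; adjust to your app.
    curl -X POST "https://graph.facebook.com/v3.1/me/live_videos" \
      -d "status=LIVE_NOW" \
      -d "title=Test broadcast" \
      -d "access_token=$ACCESS_TOKEN"
    # The response carries the RTMP ingest URL (stream_url / secure_stream_url)
    # that the encoder pushes to; the 360p cap reported above applies to this path.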

  • is this ffmpeg command optimized?

    22 June 2017, by Bob Ramsey

    I have a requirement to take a video, add some plain text, and then add some rotated text at different times, locations, and durations. I want to use processor power in the most efficient way possible, since this will run 20,000 times (yes, really: we’re personalizing a video for students at a university). This is what I finally came up with:

    ffmpeg -y -i INPUT.mp4 -filter_complex
     "drawtext=enable='between(t,14,16)':fontfile=tahoma.ttf:fontsize=54:fontcolor=green:x=10:y=text_h + 10:text='Dana Scully',
      drawtext=enable='between(t,19,23)':fontfile=tahoma.ttf:fontsize=16:fontcolor=red:x=150:y=220:text='Dana Scully  \<dana.scully\@fbi.gov\>',
      drawtext=enable='between(t,99,104)':fontfile=tahoma.ttf:fontsize=28:fontcolor=green:x=480:y=text_h + 160:text='Dana Scully',
      drawtext=enable='between(t,14,16)':fontfile=tahoma.ttf:fontsize=16:fontcolor=yellow:x=40:y=25:text='Dana Scully  \<dana.scully\@fbi.gov\>',
      drawtext=enable='between(t,180,186)':fontfile=tahoma.ttf:fontsize=88:fontcolor=green:x=20:y=430:text='Dana Scully'[text];
      color=c=#111111:s=1280x720:d=1,format=yuv444p[colorbk];
      [colorbk]drawtext=fontfile=tahoma.ttf:fontsize=16:fontcolor=purple:x=(w-text_w)/2:y=(h-text_h)/2:text='by',drawtext=fontfile=tahoma.ttf:fontsize=32:fontcolor=green:x=(w-text_w)/2:y=((h-text_h)/2)+50:text='Dana Scully',rotate=(-.5):ow=1280:oh=720:c=#111111,chromakey=#111111:similarity=0.01,format=yuva444p,colorkey=#111111:0.1[rotated];
      [text][rotated]overlay=eval=frame:x='if(gte(t,134),(if(lte(t,137),20,NAN)), NAN)':y=100[out];[out]scale=iw*.25:-1"
      -crf 20 test.mp4

    Is that about as optimized as it is going to get? I thought ffmpeg would already handle threading based on the computer’s processor, so there is no real need to mess with it. The processing will all be done on AWS VMs.

    Rotating the text is what really slows it down.

    Any ideas?
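
    One direction that may be worth testing (a sketch, not a verified speed-up; file names are hypothetical): since the rotated card is static, it can be rendered once to a transparent PNG and overlaid directly, replacing the per-frame drawtext/rotate/chromakey/colorkey chain. Only the name line is shown for brevity:

    # Pre-render the rotated name card once, on a transparent background.
    ffmpeg -y -f lavfi -i color=c=black@0.0:s=1280x720:d=1,format=rgba \
      -vf "drawtext=fontfile=tahoma.ttf:fontsize=32:fontcolor=green:x=(w-text_w)/2:y=(h-text_h)/2:text='Dana Scully',rotate=-.5:c=none" \
      -frames:v 1 card.png
    # Overlay the pre-rendered card instead of rotating and keying every frame.
    ffmpeg -y -i INPUT.mp4 -i card.png -filter_complex \
      "[0][1]overlay=20:100:enable='between(t,134,137)'" -crf 20 test.mp4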

  • FFmpeg: what exactly is the filtergraph pipeline like during transcoding?

    8 September 2017, by Jeff Gong

    I have been studying the source code for FFmpeg to attempt to understand its threading model and how it processes inputs. For example, when I run a command like :

    ffmpeg -i video.mp4 -s hd720 -c:v libx264 -preset medium -c:a aac -profile:v main -r 60 -f null /dev/null

    The input itself is irrelevant, but I am trying to understand how the transcoding pipeline works. In the source code, I see that the main steps occur in the functions transcode and transcode_step.

    It seems like for a single input, a single frame is read in, decoded, encoded, and written out. The process is obviously very complex, but what I really don’t understand is what FFmpeg does when it attempts to build out a filtergraph. For example, in transcode_step of ffmpeg.c, the following code runs right after an output stream has been selected:

    if (ost->filter && !ost->filter->graph->graph) {
       if (ifilter_has_all_input_formats(ost->filter->graph)) {
           ret = configure_filtergraph(ost->filter->graph);
           if (ret < 0) {
               av_log(NULL, AV_LOG_ERROR, "Error reinitializing filters!\n");
               return ret;
           }
       }
    }

    Does this only apply if I specify a specific series of filtering options to FFmpeg, like the one in this link? For the sample command above, is this code still executed?
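
    For what it’s worth, even without an explicit -vf or -filter_complex, some output options are implemented through an implicitly built filtergraph: the ffmpeg documentation notes that -s, as an output option, inserts the scale filter at the end of the corresponding graph. So the sample command above still configures a (trivial) filtergraph, roughly as if it had been written as:

    # Equivalent form with the implicit scale filter spelled out (hd720 = 1280x720).
    ffmpeg -i video.mp4 -vf scale=1280:720 -c:v libx264 -preset medium \
      -c:a aac -profile:v main -r 60 -f null /dev/null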

    One last question is for the case where I run an FFmpeg instance with a single input but multiple outputs (perhaps different variants for transcoding). In this scenario, does a single pass of transcode_step take an input frame and send it through decoding and encoding for only one of the outputs? Or does it take one frame at a time and process that frame for each of the outputs specified?
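
    For reference, a single-input, multiple-output run of the kind described might look like this (file names, sizes, and bitrates are arbitrary):

    # One decode feeding two independent encodes at different sizes.
    ffmpeg -i video.mp4 \
      -s hd720 -c:v libx264 -b:v 3M -c:a aac out_720.mp4 \
      -s hd480 -c:v libx264 -b:v 1M -c:a aac out_480.mp4

    As far as the ffmpeg.c source of that era goes, each decoded frame is forwarded to every filtergraph fed by that input stream (see send_frame_to_filters), so the input is decoded once rather than once per output.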