Advanced search

Media (1)

Keyword: - Tags -/iphone

Other articles (46)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a SPIP section (rubrique).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type "media", the fields not displayed by default are: Quick description
    It is also in this configuration section that you can specify the (...)

  • Libraries and binaries specific to video and audio processing

    31 January 2010

    The following software and libraries are used by SPIPmotion in one way or another.
    Required binaries
    FFMpeg: the main encoder; it can transcode almost every type of video and audio file into formats that can be played on the web. See this tutorial for its installation;
    Oggz-tools: tools for inspecting ogg files;
    Mediainfo: retrieves information from most video and audio formats;
    Complementary, optional binaries
    flvtool2: (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both to conventional computers (...)

On other sites (8223)

  • How to concatenate two MP4 files, which require HTTP basic Authorization: Bearer, using ffmpeg?

    8 July 2023, by Jeff Strongman

    Hello dear ffmpeg experts! 🧠 🎯

    I ran the following command, which worked perfectly:

    ffmpeg -protocol_whitelist https,concat,tls,tcp -i "concat:https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_0.m4v|https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_1.m4v|https://dash.akamaized.net/akamai/bbb_30fps/bbb_30fps_1280x720_4000k/bbb_30fps_1280x720_4000k_2.m4v" -c:v copy -vframes 180 -y Movie_of_6_seconds.mp4

    I followed the recommended solution of the following post:
    How to concatenate two MP4 files using FFmpeg?

    You can execute the command on your local computer and see that it should run just fine...

    I used option 3, the concat protocol, which does indeed concatenate the init segment plus the progressive segments.

    However... when every segment on the server I refer to is password protected, it fails with 401 Unauthorized, even though I added the following option before specifying the -i "concat:..." input:
    -headers "Authorization: Bearer bas64user:password"
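
    For illustration, the failing invocation looked roughly like this (a sketch only; the Bearer token and the protected segment URLs are placeholders, not the values I actually used):

    ffmpeg -protocol_whitelist https,concat,tls,tcp -headers "Authorization: Bearer <token>" -i "concat:https://protected.example.com/seg_0.m4v|https://protected.example.com/seg_1.m4v|https://protected.example.com/seg_2.m4v" -c:v copy -vframes 180 -y Movie_of_6_seconds.mp4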

    It seems to me that the headers are not passed down to the concat protocol inside ffmpeg's input... it simply ignores them. When I used the same -headers option on a single file, without concat, the authorization passed successfully.
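
    For contrast, the single-file case where the authorization header was honoured looked roughly like this (again a sketch, with a placeholder token and URL):

    ffmpeg -headers "Authorization: Bearer <token>" -i "https://protected.example.com/seg_1.m4v" -c copy -y single_segment.mp4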

    Notes:

    • Every segment is 120 frames long, so at most I could have generated 2 * 120 = 240 frames; I wanted a movie of 6 seconds and not 8, and in this way also to test that ffmpeg is smart enough to stop processing the whole input. To do that, I used -vframes 180, where 180 / 30 (FPS) = 6 seconds.
    • I used -c:v copy to get, without re-encoding, only the video part (no audio!).
    • I used -y to overwrite an existing output file...
    • 0.m4v is the init file! It is a small file that holds the metadata of the original video, which was produced with MPEG-DASH.
    • 1.m4v and 2.m4v are the progressive segments.

    Is there a way to pass the HTTP authorization header (Authorization: Bearer) to all of the chained files?

    Like:

    • Via a JSON content type on the ffmpeg request
    • Or user:password@video_segment (although it seems to me that this is not a header? see the sketch right after this list)
    • Somehow specify the header inside the concat command?
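
    A rough sketch of the user:password idea (untested; note that this embeds HTTP Basic credentials in each URL rather than sending a Bearer header, and user, password and protected.example.com are placeholders):

    ffmpeg -protocol_whitelist https,concat,tls,tcp -i "concat:https://user:password@protected.example.com/seg_0.m4v|https://user:password@protected.example.com/seg_1.m4v|https://user:password@protected.example.com/seg_2.m4v" -c:v copy -vframes 180 -y Movie_of_6_seconds.mp4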

    I don't want to first download all the files and only then get around the password protection... it both takes ridiculous time and other resources, and I would like to record from a segment stream that is "endless", meaning a camera that keeps streaming data.

    Thanks in advance 🙏🏻,

    FFmpeg noobie 🙈

  • Is there an efficient way to retrieve frames from a video in Android ?

    28 March 2015, by Naveed

    I have an app which requires me to retrieve frames from a video and do some processing with them. However, the frame retrieval is so slow that it is unacceptable: sometimes it takes up to 2.5 seconds to retrieve a single frame. I am using MediaMetadataRetriever, as most Stack Overflow questions suggested, but the performance is very bad. Here is what I have:

      private List<Bitmap> retrieveFrames() {

           MediaMetadataRetriever fmmr = new MediaMetadataRetriever();
           fmmr.setDataSource("/path/to/some/video.mp4");
           String strLength = fmmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
           long milliSecs = Long.parseLong(strLength);
           long microSecLength = milliSecs * 1000;

           Log.d("TAG", "length: " + microSecLength);
           long one_sec = 1000000; // one second in microseconds

           ArrayList<Bitmap> frames = new ArrayList<>();
           int j = 0;
           for (int i = 0; i < microSecLength; i += (one_sec / 5)) {
               long time = System.currentTimeMillis();
               Bitmap frame = fmmr.getFrameAtTime(i, MediaMetadataRetriever.OPTION_CLOSEST);
               j++;
               Log.d("TAG", "Frame number: " + j + " Time taken: " + (System.currentTimeMillis() - time));
               // commented out because each frame would be written to disk instead of holding them in memory
               //  frames.add(frame);
           }
           fmmr.release();
           return frames;
       }

    The above code logs:

    03-26 21:49:29.781  13213-13239/com.example.naveed.myapplication D/TAG﹕ length: 4949000
    03-26 21:49:30.187  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 1 Time taken: 406
    03-26 21:49:30.779  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 2 Time taken: 592
    03-26 21:49:31.578  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 3 Time taken: 799
    03-26 21:49:32.632  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 4 Time taken: 1054
    03-26 21:49:33.895  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 5 Time taken: 1262
    03-26 21:49:35.382  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 6 Time taken: 1486
    03-26 21:49:37.128  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 7 Time taken: 1746
    03-26 21:49:39.077  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 8 Time taken: 1948
    03-26 21:49:41.287  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 9 Time taken: 2210
    03-26 21:49:43.717  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 10 Time taken: 2429
    03-26 21:49:44.093  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 11 Time taken: 376
    03-26 21:49:44.707  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 12 Time taken: 614
    03-26 21:49:45.539  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 13 Time taken: 831
    03-26 21:49:46.597  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 14 Time taken: 1057
    03-26 21:49:47.875  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 15 Time taken: 1278
    03-26 21:49:49.384  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 16 Time taken: 1508
    03-26 21:49:51.112  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 17 Time taken: 1728
    03-26 21:49:53.096  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 18 Time taken: 1983
    03-26 21:49:55.315  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 19 Time taken: 2218
    03-26 21:49:57.711  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 20 Time taken: 2396
    03-26 21:49:58.065  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 21 Time taken: 354
    03-26 21:49:58.640  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 22 Time taken: 574
    03-26 21:49:59.369  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 23 Time taken: 728
    03-26 21:50:00.112  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 24 Time taken: 742
    03-26 21:50:00.834  13213-13239/com.example.naveed.myapplication D/TAG﹕ Frame number: 25 Time taken: 721

    As you can see from above, it takes about 18-25 seconds to retrieve 25 frames from a video that is less than 5 seconds long.

    I have also tried this library, which uses FFmpeg underneath to do the same. I am not sure how well the library is implemented, but it only improves the overall performance by a couple of seconds, meaning it takes about 15-20 seconds to do the same.

    So my question is: is there a way to do this more quickly? My friend has an iOS app where he does something similar, but it only takes a couple of seconds, and he is grabbing even more frames; however, he is not sure how to do it on Android.

    Is there anything on Android that would speed up the process? Am I approaching this wrong?

    The end goal is to stitch those frames together into a GIF.
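
    For comparison, an ffmpeg command line (for example through an FFmpeg wrapper library, or on a desktop machine) decodes the video sequentially instead of seeking for every frame, which tends to be much faster than repeated getFrameAtTime calls. A rough sketch with hypothetical file names, sampling 5 frames per second and, as an alternative, building the GIF directly:

    ffmpeg -i video.mp4 -vf fps=5 frames/frame_%03d.png
    ffmpeg -i video.mp4 -vf "fps=5,scale=480:-1" output.gif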

  • Split h.264 stream into multiple parts in python

    31 January 2023, by BillPlayz

    My objective is to split an H.264 stream into multiple parts: while reading the stream from a pipe, I would like to save it into x-second-long chunks (10 seconds in my case).
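
    For reference, ffmpeg itself can produce this kind of splitting on its own with its segment muxer; a rough, untested sketch (the output pattern is a placeholder) that reads the H.264 stream from stdin and starts a new file roughly every 10 seconds, cutting at keyframes:

    ffmpeg -r 30 -i - -c copy -f segment -segment_time 10 -reset_timestamps 1 /home/survpi-camera/part_%03d.mp4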


    I am using a libcamera-vid subprocess on my Raspberry Pi that outputs the H.264 stream to stdout.
    Possibly irrelevant: libcamera-vid prints a message for every frame, and I detect it with isFrameStopLine.
    To convert the stream, I use an ffmpeg subprocess, as you can see in the code below.

    Imagine it like this:
    The stream is running...
    - Start recording to a file
    - Sleep x seconds
    - Finish recording to the file
    - Start recording to a new file
    - Sleep x seconds
    - Finish recording the new file
    - and so on...


    Here is my current code. However, upon running it the first export succeeds, and after the second or third the ffmpeg subprocess terminates with the error:
    pipe:: Invalid data found when processing input
    And shortly after, the Python process dies too, because of the ffmpeg termination I believe:

    Traceback (most recent call last):
      File "/home/survpi-camera/main.py", line 56, in <module>
        processStreamLine(readData)
      File "/home/survpi-camera/main.py", line 16, in processStreamLine
        streamInfo["process"].stdin.write(data)
    BrokenPipeError: [Errno 32] Broken pipe

    import subprocess
    import time

    recentStreamProcesses = []
    streamInfo = {
        "lastStreamStart": -1,
        "process": None
    }

    def processStreamLine(data):
        isInfoLine = ((data.startswith(b"[") and (b"INFO" in data)) or (data == b"Preview window unavailable"))
        isFrameStopLine = (data.startswith(b"#") and (b" fps) exp" in data))
        if ((not isInfoLine) and (not isFrameStopLine)):
            streamInfo["process"].stdin.write(data)

        if (isFrameStopLine):
            if (time.time() - streamInfo["lastStreamStart"] >= 10):
                print("10 seconds passed, exporting...")
                exportStream()
                createNewStream()

    def createNewStream():
        streamInfo["lastStreamStart"] = time.time()
        streamInfo["process"] = subprocess.Popen([
            "ffmpeg",
            "-r", "30",
            "-i", "-",
            "-c", "copy", ("/home/survpi-camera/" + str(round(time.time())) + ".mp4")
        ], stdin=subprocess.PIPE, stderr=subprocess.STDOUT)
        print("Created new streamProcess.")

    def exportStream():
        print("Exporting...")
        streamInfo["process"].stdin.close()
        recentStreamProcesses.append(streamInfo["process"])

    cameraProcess = subprocess.Popen([
        "libcamera-vid",
        "-t", "0",
        "--width", "1920",
        "--height", "1080",
        "--codec", "h264",
        "--inline",
        "--listen",
        "-o", "-"
    ], stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.STDOUT)

    createNewStream()

    while True:
        readData = cameraProcess.stdout.readline()
        processStreamLine(readData)


    Thank you in advance!