Media (1)

Keyword: - Tags -/ticket

Other articles (74)

  • Customise by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to be encoded; id_objet, the unique identifier of the object to which the encoded document will automatically be attached; objet, the type of object to which (...)

  • Videos

    21 April 2011, by

    Like "audio" documents, Mediaspip displays videos wherever possible using the HTML5 tag.
    One of the drawbacks of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name no names) and that each browser natively supports only certain video formats.
    Its main advantage, on the other hand, is native video support in browsers, which makes it possible to do without Flash and (...)

On other sites (8195)

  • Manual encoding into MPEG-TS

    4 July 2014, by Lane

    SO...

    I am trying to take an H264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal MPEG-TS, single-program, valid stream and to not include any timing information (PCR, PTS, DTS).

    I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...

    [NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts

    Input #0, mpegts, from 'video.ts':
    Duration: N/A, bitrate: N/A
    Program 1
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...it seems like the warning about the start time is not a big deal... but ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts, I get...

    Input #0, mpegts, from 'video2.ts':
    Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
    Program 1
     Metadata:
       service_name    : Service01
       service_provider: FFmpeg
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...so you can see the timing information and bitrate are there, and so is the metadata.

    My H264 video consists of only I and P Frames (with the SPS and PPS preceding the I Frame, of course) and the way that I am creating my MPEG-TS stream is...

    1. Write a single PAT at the beginning of the file
    2. Write a single PMT at the beginning of the file
    3. Create TS and PES packets from SPS, PPS and I Frame (AUD NALs too, if this is required?)
    4. Create TS and PES packets from P Frame (again, AUD NALs too, if required)
    5. For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet
    6. Repeat 3-5 for the entire file (see the header-packing sketch below)
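    Not from the original post: a minimal sketch in Java of the 4-byte TS packet header packing that the dumps below follow, assuming the standard ISO/IEC 13818-1 layout (sync byte, payload_unit_start_indicator, PID, adaptation_field_control, continuity counter). The helper name is hypothetical.

    // Hypothetical helper: packs the 4-byte MPEG-TS packet header.
    static byte[] tsHeader(boolean payloadUnitStart, int pid, int afc, int cc) {
        byte[] h = new byte[4];
        h[0] = 0x47;                                     // sync byte
        h[1] = (byte) (((payloadUnitStart ? 1 : 0) << 6) // payload_unit_start_indicator
                     | ((pid >> 8) & 0x1F));             // PID, high 5 bits (TEI/priority left at 0)
        h[2] = (byte) (pid & 0xFF);                      // PID, low 8 bits
        h[3] = (byte) (((afc & 0x3) << 4)                // 01 = payload only, 11 = adaptation field + payload
                     | (cc & 0x0F));                     // continuity counter
        return h;
    }

    // tsHeader(true, 0x0000, 1, 0) yields 47 40 00 10 (the PAT below);
    // tsHeader(true, 0x0100, 1, 0) yields 47 41 00 10 (the first I Frame packet below).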

    ...my PAT looks like this...

    4740 0010 0000 b00d 0001 c100 0000 01f0
    002a b104 b2ff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...and my PMT looks like this...

    4750 0010
    0002 b012 0001 c100 00ff fff0 001b e100
    f000 c15b 41e0 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff

    ...notice that after the c100 00, the "ff ff" (a PCR PID of 0x1FFF), f0... says that we are not using a PCR... Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...

    4741 0010 0000 01e0
    0000 8000 0000 0000 0109 f000 0000 0127
    4d40 288d 8d60 2802 dd80 b501 0101 4000
    00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
    e343 0016 e400 0670 05de 5c16 345d c000
    0000 0128 ee3c 8000 0000 0165 8880 0020
    0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
    13b1 666b 9fc6 03e9 e321 36bf 1788 347b
    eb23 fc89 5772 6e2e 1714 96df ed16 9b30
    252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
    81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
    6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
    2f13 be84

    ...you’ll notice that after the 01e0 0000, the 8000 00 is the PES header extension where I specify no PTS / DTS and a remaining header length of zero (a byte-level sketch of this header follows the P Frame bytes below). My first P Frame packet looks like...

    4741 001d
    0000 01e0 0000 8000 0000 0000 0109 f000
    0000 0141 9a00 0200 0593 ff45 a7ae 1acd
    f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
    4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
    f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
    050a f355 fbec db01 6562 6405 04aa e011
    50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
    6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
    3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
    1b2c 974c 6924 1a1f 99ef 063c b99a c507
    8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
    2fda 6550 f907 3f87
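    Not from the original post: the bytes called out above (00 00 01 e0, 00 00, 80 00 00) form the complete 9-byte PES header for a video stream with no PTS/DTS and an unbounded packet length; an illustrative Java helper (hypothetical name) could build it like this:

    // Hypothetical helper: PES header with no optional fields (no PTS/DTS).
    static byte[] pesHeaderNoTimestamps(int streamId) { // 0xE0 = first video stream
        return new byte[] {
            0x00, 0x00, 0x01,  // packet_start_code_prefix
            (byte) streamId,   // stream_id
            0x00, 0x00,        // PES_packet_length = 0 (unbounded, permitted for video)
            (byte) 0x80,       // '10' marker bits, scrambling/priority/alignment/copyright flags all 0
            0x00,              // PTS_DTS_flags = '00' -> no PTS, no DTS; no other optional fields
            0x00               // PES_header_data_length = 0
        };
    }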

    ...and whenever an I Frame or P Frame is ending, I have a TS packet with an adaptation field like...

    4701 003c b000 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...where the first 0xb0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, ffmpeg can take my file and create a valid movie from it in any format. However, I need the file I create to be in the proper format itself, and I cannot quite figure out what the last piece I am missing is. Any ideas?
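    Not from the original post: the CRC mentioned above for the PMT is the 32-bit checksum that closes every PSI section (PAT and PMT alike), commonly called CRC-32/MPEG-2: polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection, no final XOR, computed over the section from table_id up to (but not including) the CRC_32 field. A bit-by-bit sketch in Java, with no table-driven optimisation:

    // Hypothetical helper: CRC-32/MPEG-2 over data[offset .. offset+length-1].
    static int crc32Mpeg2(byte[] data, int offset, int length) {
        int crc = 0xFFFFFFFF;
        for (int i = offset; i < offset + length; i++) {
            crc ^= (data[i] & 0xFF) << 24;
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x80000000) != 0 ? (crc << 1) ^ 0x04C11DB7 : crc << 1;
            }
        }
        return crc; // appended big-endian as the last four bytes of the section
    }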

  • ffmpeg 4: Using the stream_loop parameter to loop the audio during a video ends up with an infinite loop

    17 June 2020, by JarsOfJam-Scheduler

    Summary

    1. Context
    2. The software I use
    3. The problem
    4. Results
       4.1. Actual Results
       4.2. Expected Results
    5. What did I try to fix the bug?
    6. How to reproduce this bug: minimal and testable example with the provided required data
    7. The question
    8. Sources

    Context

    I want to set an audio WAV file as the background sound of a WEBM video. The video can be shorter or longer than the audio. At the moment I add the audio over the video, I don't know the length of either stream. The audio must repeat until the video ends (the audio can be truncated if the video ends before the end of the last repetition of the audio).

    The software I use

    I use ffmpeg version 4.2.2-1ubuntu1 18.04.sav0.

    The problem

    ffmpeg seems to enter an infinite loop when it processes the mixing of the audio and the video. Also, the length of the output file currently being generated (which contains both video and audio) is equal to the length of the audio, instead of the length of the video.

    The problem seems to be triggered by this command line:

    ffmpeg -i directory_1/video.webm -stream_loop -1 -fflags +shortest -max_interleave_delta 50000 -i directory_2/audio.wav directory_3/video_and_audio.webm

    Results

    Actual Results

    Three things:

    1. The infinite loop of the ffmpeg process: I must manually stop the ffmpeg process.

    2. The output video file with music (still being generated, but written out anyway): it contains both audio and video, but the length of the output file is equal to the length of the audio instead of the length of the video.

    3. The following output logs:

    ffmpeg version 4.2.2-1ubuntu1 18.04.sav0 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.5.0-3ubuntu1 18.04)
      configuration: --prefix=/usr --extra-version='1ubuntu1 18.04.sav0' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
      libavutil      56. 31.100 / 56. 31.100
      libavcodec     58. 54.100 / 58. 54.100
      libavformat    58. 29.100 / 58. 29.100
      libavdevice    58.  8.100 / 58.  8.100
      libavfilter     7. 57.100 /  7. 57.100
      libavresample   4.  0.  0 /  4.  0.  0
      libswscale      5.  5.100 /  5.  5.100
      libswresample   3.  5.100 /  3.  5.100
      libpostproc    55.  5.100 / 55.  5.100
    Input #0, matroska,webm, from 'youtubed/my_youtube_video.webm':
      Metadata:
        encoder         : Chrome
      Duration: N/A, start: 0.000000, bitrate: N/A
        Stream #0:0(eng): Video: vp8, yuv420p(progressive), 3200x1608, SAR 1:1 DAR 400:201, 1k tbr, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
    Guessed Channel Layout for Input Stream #1.0 : stereo
    Input #1, wav, from 'tmp_music/original_music.wav':
      Duration: 00:00:11.78, bitrate: 1411 kb/s
        Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
    Stream mapping:
      Stream #0:0 -> #0:0 (vp8 (native) -> vp9 (libvpx-vp9))
      Stream #1:0 -> #0:1 (pcm_s16le (native) -> opus (libopus))
    Press [q] to stop, [?] for help
    [libvpx-vp9 @ 0x5645268aed80] v1.8.2
    [libopus @ 0x5645268b09c0] No bit rate set. Defaulting to 96000 bps.
    Output #0, webm, to 'youtubed/my_youtube_video_with_music.webm':
      Metadata:
        encoder         : Lavf58.29.100
        Stream #0:0(eng): Video: vp9 (libvpx-vp9), yuv420p(progressive), 3200x1608 [SAR 1:1 DAR 400:201], q=-1--1, 200 kb/s, 1k fps, 1k tbn, 1k tbc (default)
        Metadata:
          alpha_mode      : 1
          encoder         : Lavc58.54.100 libvpx-vp9
        Side data:
          cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
        Stream #0:1: Audio: opus (libopus), 48000 Hz, stereo, s16, 96 kb/s
        Metadata:
          encoder         : Lavc58.54.100 libopus

    Expected Results

    1. No infinite loop during the ffmpeg process.

    2. Concerning the output logs, I don't know what they should look like.

    3. The output file with the audio and the video should behave as follows:

       3.1. If the video is longer than the audio, then the audio is repeated until it exactly fits the video. The audio can be truncated.

       3.2. If the video is shorter than the audio, then the audio is truncated and exactly fits the video.

       3.3. If both video and audio are of the same length, then the audio exactly fits the video.

    How to reproduce this bug? (+ required data)

    1. Download the following files (resp. audio and video) (I must refresh these download links every 24 hours):

       1.1. https://a.uguu.se/dmgsmItjJMDq_audio.wav

       1.2. https://a.uguu.se/w3qHDlGq6mOW_video.webm

    2. Move them into the directory/directories of your choice.

    3. Open your CLI, move to the appropriate directory and copy/paste/execute the instruction given in Part "The problem" (don't forget to modify this instruction if needed, indicating the appropriate directories according to step 2).

    4. You'll face my problem.

    What did I try to fix the bug?

    Nothing, since I don't even understand why the bug occurs.

    The question

    How do I correct my command in order to mix these audio and video streams without any infinite loop during the ffmpeg process, keeping in mind that I don't know their lengths, and that the audio must be repeated in order to fit the video, even if the audio must be truncated (in the case where the last repetition of the audio file has to be cut because the video stream has just ended)?
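    Not part of the original question: one commonly suggested variant for this kind of task is to loop only the audio input with -stream_loop -1 and end the encode with the output-side -shortest option (rather than the input-side -fflags +shortest), for example (paths reused from the command above; whether this fully avoids the hang can depend on the ffmpeg build):

    ffmpeg -i directory_1/video.webm -stream_loop -1 -i directory_2/audio.wav -map 0:v -map 1:a -shortest directory_3/video_and_audio.webm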

    



    Sources

    The source is the command line you can find in Part "The problem".


  • Salty Game Music

    31 May 2011, by Multimedia Mike — General

    Have you heard of Google’s Native Client (NaCl) project? Probably not. Basically, it allows native code modules to run inside a browser (where ‘browser’ is defined pretty narrowly as ‘Google Chrome’ in this case). Programs are sandboxed so they aren’t a security menace (or so the whitepapers claim) but are allowed to access a variety of APIs including video and audio. The latter API is significant because sound tends to be forgotten in all the hullabaloo surrounding non-Flash web technologies. At any rate, enjoy NaCl while you can because I suspect it won’t be around much longer.

    After my recent work upgrading some old music synthesis programs to use more modern audio APIs, I got the idea to try porting the same code to run under NaCl in Chrome (first Nosefart, then Game Music Emu/GME). In this exercise, I met with very limited success. This blog post documents some of the pitfalls in my excursion.



    Infrastructure
    People who know me know that I’m rather partial — to put it gently — to straight-up C vs. C++. The NaCl SDK is heavily skewed towards C++. However, it does provide a Python tool called init_project.py which can create the skeleton of a project and can do so in C with the '-c' option:

    ./init_project.py -c -n saltynosefart
    

    This generates something that can be built using a simple ‘make’. When I added Nosefart’s C files, I learned that the project Makefile has places for project-necessary CFLAGS but does not honor them. The problem is that the generated Makefile includes a broader system Makefile that overrides the CFLAGS in the project Makefile. Going into the system Makefile and changing "CFLAGS =" -> "CFLAGS +=" solves this problem.

    Still, maybe I’m the first person to attempt building something in Native Client, so I’m the first person to notice this?

    Basic Playback
    At least the process to create an audio-enabled NaCl app is well-documented. Too bad it doesn’t seem to compile as advertised. According to my notes on the matter, I filled in PPP_InitializeModule() with the appropriate boilerplate as outlined in the docs but got a linker error concerning get_browser_interface().

    Plan B: C++
    Obviously, the straight C stuff is very much a second-class citizen in this NaCl setup. Fortunately, there is already that fully functional tone generator example program in the limited samples suite. Plan B is to copy that project and edit it until it accepts Nosefart/GME audio instead of a sine wave.

    The build system assumes all C++ files should have .cc extensions. I have to make some fixes so that it will accept .cpp files (either that, or rename all .cpp to .cc, but that’s not very clean).

    Making Noise
    You’ll be happy to know that I did successfully swap out the tone generator for either Nosefart or GME. Nosefart has a slightly fickle API that requires revving the emulator frame by frame and generating a certain number of audio samples. GME’s API is much easier to work with in this situation — just tell it how many samples it needs to generate and give it a pointer to a buffer. I made NES and SNES music play through this ad-hoc browser plugin, and I’m confident all the other supported formats would have worked if I went through the bother of converting the music data files into C headers to be included in the NaCl executable binaries (dynamically loading data via the network promised to be a far more challenging prospect reserved for phase 3 of the project).

    Portable?
    I wouldn’t say so. I developed it on Linux and things ran fine there. I tried to run the same binaries on the Windows version of Chrome to no avail. It looks like it wasn’t even loading the .nexe files (NaCl executables).

    Thinking About The (Lack Of A) Future
    As I was working on this project, I noticed that the online NaCl documentation materialized explicit banners warning that my NaCl binaries compiled for Chrome 11 won’t work for Chrome 12 and that I need to code to the newly-released 0.3 SDK version. Not a fuzzy feeling. I also don’t feel good that I’m working from examples using bleeding edge APIs that feature deprecation as part of their naming convention, e.g., pp::deprecated::ScriptableObject().

    Ever-changing API + minimal API documentation + API that only works in one browser brand + requiring end user to explicitly enable feature = … well, that’s why I didn’t bother to release any showcase pertaining to this little experiment. Would have been neat, but I strongly suspect that this is yet another one of those APIs that Google decides to deprecate soon.

    See Also: