Other articles (105)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used instead.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8618)

  • Android: How to pass frames from FFmpeg back to Android

    23 October 2013, by yarin

    This is an architecture question; I am genuinely interested in the answer.

    I am building an app with the following goals:

    1. Record video with effects in real time (using FFmpeg)

    2. Display the customized video to the user in real time while they are recording

    So, after a month of working, I decided it was time to come back to goal number 2, which is worth thinking about :)
    I have a working skeleton app that records video with effects in real time,
    but I still have to preview the customized frames back to the user.

    My options (and this is my question):

    1. Each frame passed from onPreviewFrame(byte[] video_frame_data, Camera camera) goes through JNI to FFmpeg for encoding; after I apply the effects, the customized frame is sent back to Android through the same JNI for display (i.e. onPreviewFrame -> JNI to FFmpeg -> immediately apply effect -> send the customized frame back to the Android side for display -> encode the customized frame).

    Advantages: it looks like the easiest approach to use.

    Disadvantages: crossing JNI twice, or passing the frame back, could consume time (I really don't know whether that is a big price to pay, since it is only one byte array or int array per frame sent back to the Android side).

    2. I have heard about OpenGL in the NDK, but I think the surface itself is created on the Android side, so is it really going to be better?
    I would prefer to keep using the surface I already use in Java.

    3. Build a video player on top of FFmpeg to preview each customized frame in real time.

    Thanks for your help; I hope the first solution is workable and does not cost too much in terms of real-time processing.
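
    For what it's worth, a minimal Java-side sketch of option 1 (one JNI call per frame in each direction). The native method applyEffectNative, the library name ffmpeg_effects, and the FrameRenderer view are hypothetical placeholders for the FFmpeg/JNI layer, not an existing API:

    import android.hardware.Camera;

    public class EffectPreviewCallback implements Camera.PreviewCallback {

        static {
            // Hypothetical native library wrapping the FFmpeg effect/encode code.
            System.loadLibrary("ffmpeg_effects");
        }

        // Hypothetical JNI entry point: applies the effect in native code, queues
        // the frame for encoding there, and returns the processed pixels.
        private static native byte[] applyEffectNative(byte[] frame, int width, int height);

        private final FrameRenderer renderer; // hypothetical view that draws raw frames

        public EffectPreviewCallback(FrameRenderer renderer) {
            this.renderer = renderer;
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            // The round trip the question asks about: one byte[] in, one byte[] out.
            byte[] processed = applyEffectNative(data, size.width, size.height);
            renderer.drawFrame(processed, size.width, size.height);
        }
    }

    The per-frame cost is two array copies across the JNI boundary; whether that fits a real-time budget depends on the preview resolution, so it is worth measuring before committing to this design.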

  • Merge conference video and audio call output using ffmpeg hstack

    3 January 2020, by venkat

    I have two video files and two audio files:

    Input #0, matroska,webm, from 'first.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:15
     Duration: 00:06:01.24, start: 3.817000, bitrate: 1547 kb/s
       Stream #0:0(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 16.75 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video
    Input #1, matroska,webm, from 'second.mkv':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:24
     Duration: 00:05:49.79, start: 13.509000, bitrate: 810 kb/s
       Stream #1:0(eng): Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 1k tbr, 1k tbn, 1k tbc (default)
       Metadata:
         title           : Video
    Input #2, matroska,webm, from 'first.mka':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:15
     Duration: 00:06:01.30, start: 3.786000, bitrate: 46 kb/s
       Stream #2:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         title           : Audio
    Input #3, matroska,webm, from 'second.mka':
     Metadata:
       encoder         : GStreamer matroskamux version 1.8.1.1
       creation_time   : 2017-10-16 14:13:24
     Duration: 00:05:50.61, start: 13.498000, bitrate: 50 kb/s
       Stream #3:0(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
       Metadata:
         title           : Audio

    The files above are the output of a video conference call; I want to merge them all together and show the two videos side by side.

    The start times of the video and audio files differ, so I want to sync the audio and video and merge the videos side by side.

    Initially I used the following command to merge the two videos:

    ffmpeg -i first.mkv -i second.mkv -filter_complex "
    [0:v]scale=320:240,pad=325:240,setsar=1[l];[1:v]scale=320:240,setsar=1[r];
    [l][r]hstack" -c:v libx264 -preset ultrafast -crf 0 merged.mp4

    After that, I used the following command to merge, as suggested by @mulvya:

    ffmpeg -ss 00:00:09.692 -i first.mkv -i second.mkv -i first.mka -i second.mka -filter_complex "[0:v]scale=320:240,pad=325:240,setsar=1[l];[1:v]scale=320:240,setsar=1[r];[l][r]hstack=shortest=1[v];[3]adelay=9712|9712[3a];[2][3a]amerge[a]" -map '[v]' -map '[a]' -c:v libx264 -preset slower -crf 0 -c:a aac -ac 2 merged.mp4

    The -ss value is the difference between the two video start times (13.509 - 3.817 = 9.692 s), and the adelay value is the difference between the two audio start times (13.498 - 3.786 = 9.712 s, i.e. 9712 ms).

    Sample test conference files:

    1. https://drive.google.com/open?id=0ByVMq5U43FGlbXpXR3JtSnFTaWM

    2. https://drive.google.com/open?id=0ByVMq5U43FGlbENVRWlTWktQb3M

    3. https://drive.google.com/open?id=0ByVMq5U43FGldndlZDNpNWxWY2M

    4. https://drive.google.com/open?id=0ByVMq5U43FGlei1oRjNKeXRZbE0

    Now I am facing audio sync issues, and the second audio track sounds low.

    The expected result is the first and second videos merged side by side, with the audio in sync with the merged video.

    Now I am able to get the desired output using the command below:

    ffmpeg -i first.mkv -i second.mkv -i first.mka -i second.mka -filter_complex "[0]scale=320:240,pad=645:240,setsar=1[l];[1]scale=320:240,setpts=PTS-STARTPTS+9.723/TB,setsar=1[1v];[l][1v]overlay=x=325[v];[3]adelay=9712|9712[1a];[2]adelay=31|31[2a];[2a][1a]amerge=inputs=2[a]" -map '[v]' -map '[a]' -c:v libx264 -preset slower -crf 0 -c:a aac -ac 2 merged.mp4

    But I am again facing the following issues:

    1. The second video is not encoded properly; it gets stuck in the middle while playing.
    2. Audio sync issues.
    3. The conversion process is slow. How can the above be done using hstack? (One possible approach is sketched below.)

    Any suggestions or help?
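
    For what it's worth, an untested hstack-based sketch: instead of shifting PTS on the second video (which makes hstack wait until both inputs have frames), pad its start with black using the tpad filter (FFmpeg 4.2 or newer), keep the audio delays, and mix with amix instead of amerge + -ac 2 (that downmix is one likely reason the second track sounds low). Replacing -crf 0 (lossless) with a normal CRF and a faster preset should also address the slow conversion. The 9.692 s, 9712 ms, and 31 ms offsets are the start-time differences computed above:

    ffmpeg -i first.mkv -i second.mkv -i first.mka -i second.mka -filter_complex "
    [0:v]scale=320:240,setsar=1[l];
    [1:v]scale=320:240,setsar=1,tpad=start_duration=9.692:color=black[r];
    [l][r]hstack=inputs=2[v];
    [2:a]adelay=31|31[a0];[3:a]adelay=9712|9712[a1];
    [a0][a1]amix=inputs=2[a]" -map "[v]" -map "[a]" -c:v libx264 -preset veryfast -crf 18 -c:a aac merged.mp4

    Note that amix scales each input down by the number of inputs, so a volume filter after it (e.g. volume=2) may be needed to restore loudness.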

  • Using ffmpeg.js to play H265 video: only hvc1 plays, hev1 does not. Is there any way to play hev1?

    1 November 2019, by Alex

    I compiled FFmpeg into ffmpeg.js and ffmpeg.wasm using Emscripten, so that it can run in the browser and play H265 video.

    However, I found that only the hvc1 format can be played, like this:

    Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709, progressive), 1280x720, 854 kb/s, 25 fps, 25 tbr, 12800 tbn, 25 tbc (default)

    while the hev1 format cannot be played, like this:

    Stream #0:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 829 kb/s, 25 fps, 25 tbr, 12800 tbn, 25 tbc (default)

    The web browser's console shows these errors:

    libffmpeg.js:1 [mov,mp4,m4a,3gp,3g2,mj2 @ 0x644b80] reached eof, corrupted CTTS atom
    >put_char @ libffmpeg.js:1
    >write @ libffmpeg.js:1
    >write @ libffmpeg.js:1
    >doWritev @ libffmpeg.js:1
    >___syscall146 @ libffmpeg.js:1
    >(anonymous) @ wasm-009a4202-743:1
    >(anonymous) @ wasm-009a4202-739:1
    >(anonymous) @ wasm-009a4202-741:1
    >(anonymous) @ wasm-009a4202-742:1
    >(anonymous) @ wasm-009a4202-437:1
    >(anonymous) @ wasm-009a4202-2083:1
    >(anonymous) @ wasm-009a4202-87:1
    >(anonymous) @ wasm-009a4202-923:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-2864:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-3015:1
    >(anonymous) @ wasm-009a4202-247:1
    >(anonymous) @ wasm-009a4202-1073:1
    >(anonymous) @ wasm-009a4202-2261:1
    >(anonymous) @ wasm-009a4202-2562:1
    >(anonymous) @ libffmpeg.js:1
    >Decoder.openDecoder @ decoder.js:48
    >Decoder.processReq @ decoder.js:165
    >self.onmessage @ decoder.js:248
    libffmpeg.js:1 [mov,mp4,m4a,3gp,3g2,mj2 @ 0x644b80] error reading header
    >put_char @ libffmpeg.js:1
    >write @ libffmpeg.js:1
    >write @ libffmpeg.js:1
    >doWritev @ libffmpeg.js:1
    >___syscall146 @ libffmpeg.js:1
    >(anonymous) @ wasm-009a4202-743:1
    >(anonymous) @ wasm-009a4202-739:1
    >(anonymous) @ wasm-009a4202-741:1
    >(anonymous) @ wasm-009a4202-742:1
    >(anonymous) @ wasm-009a4202-437:1
    >(anonymous) @ wasm-009a4202-2083:1
    >(anonymous) @ wasm-009a4202-87:1
    >(anonymous) @ wasm-009a4202-1073:1
    >(anonymous) @ wasm-009a4202-2261:1
    >(anonymous) @ wasm-009a4202-2562:1
    >(anonymous) @ libffmpeg.js:1
    >Decoder.openDecoder @ decoder.js:48
    >Decoder.processReq @ decoder.js:165
    >self.onmessage @ decoder.js:248
    common.js:58 [2019-11-1 18:23:25:529][Decoder][IF] openDecoder return 8

    ffplay can play both of these files, so why can only hvc1 be played after compiling into ffmpeg.js and ffmpeg.wasm?
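
    One relevant difference: with the hvc1 sample entry the HEVC parameter sets live in the hvcC box, while hev1 also allows them in-band inside the samples, and some decoder setups only handle the former. As a workaround (a sketch, not a verified fix for the CTTS error above), the file can be remuxed with the codec tag rewritten to hvc1, using input_hev1.mp4 as a placeholder name:

    ffmpeg -i input_hev1.mp4 -c copy -tag:v hvc1 output_hvc1.mp4

    This changes only the packaging, without re-encoding; whether the result then plays in a given ffmpeg.js build would need testing.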