
Other articles (53)

  • General document management

    13 May 2011, by

    MédiaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original available for download in case it cannot be read in a web browser; and retrieving the metadata of the original document in order to describe the file textually.
    The tables below explain what MédiaSPIP can do (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (5703)

  • FFmpeg compose, multi layers and filters

    10 October 2019, by jadeshohy

    Pretty new to FFmpeg. We would like to use FFmpeg as an important part of an AR project.

    Currently, we find it is not easy for us.

    We want to composite the footage with FFmpeg.

    We have 5 layers and want to blend them with specific modes, like in After Effects.

    • layer-1 / [A.webm]: VP9 video with a transparent background, to be added in [normal mode]

    • layer-2 / [B.mp4]: video with optical-flare elements on a black background, to be added in [screen mode]

    • layer-3 / [C.mp4]: video with motion-graphic elements on a light background, to be added in [overlay mode]

    • layer-4 / [BG.MP4]: background video, to be added in [normal mode]

    After we blend those 4 (like a pre-compose, using the blend filter), we want to add another layer-5 / [icon.png], which is a special icon.

    Layer-5 needs to be overlaid on top of the pre-compose, at a specific position (using the overlay filter?).

    Because [icon.png] may change frequently, we want to handle it after the 4-layer blend.

    But at the first step, when we set normal mode for layer-1 in the blend filter, layer-1 [A.webm] lost its transparent background; it gave us a black background that blocked everything else.
    Can the blend filter not handle the alpha channel of a VP9 WebM?
    When we set the mode of layer-1 to screen mode, the translucent result was not what we needed.

    Could you please give us some commands to achieve the blend described above?

    Commands that really work would be extremely useful for our FFmpeg initiation.

    ffmpeg -c:v libvpx-vp9 -i transparent.webm -i bg.mp4 -filter_complex "[0:v]format=yuva420p [a]; [1:v]format=yuv420p [b]; [a][b]blend=all_mode='normal':shortest=1:all_opacity=1,format=yuv420p" output.mp4 >log
    ffmpeg version 4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
     built with Apple LLVM version 10.0.1 (clang-1001.0.46.4)
     configuration: --prefix=/usr/local/Cellar/ffmpeg/4.1.4_1 --enable-shared --enable-pthreads --enable-version3 --enable-avresample --cc=clang --host-cflags='-I/Library/Java/JavaVirtualMachines/adoptopenjdk-12.0.1.jdk/Contents/Home/include -I/Library/Java/JavaVirtualMachines/adoptopenjdk-12.0.1.jdk/Contents/Home/include/darwin' --host-ldflags= --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libbluray --enable-libmp3lame --enable-libopus --enable-librubberband --enable-libsnappy --enable-libtesseract --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librtmp --enable-libspeex --enable-videotoolbox --disable-libjack --disable-indev=jack --enable-libaom --enable-libsoxr
     libavutil      56. 22.100 / 56. 22.100
     libavcodec     58. 35.100 / 58. 35.100
     libavformat    58. 20.100 / 58. 20.100
     libavdevice    58.  5.100 / 58.  5.100
     libavfilter     7. 40.101 /  7. 40.101
     libavresample   4.  0.  0 /  4.  0.  0
     libswscale      5.  3.100 /  5.  3.100
     libswresample   3.  3.100 /  3.  3.100
     libpostproc    55.  3.100 / 55.  3.100
    [libvpx-vp9 @ 0x7f8876008600] v1.8.0
       Last message repeated 1 times
    Input #0, matroska,webm, from 'transparent.webm':
     Metadata:
       encoder         : Chrome
     Duration: N/A, start: 0.000000, bitrate: N/A
       Stream #0:0(eng): Video: vp9 (Profile 0), yuva420p(tv), 640x360, SAR 1:1 DAR 16:9, 60 fps, 60 tbr, 1k tbn, 1k tbc (default)
       Metadata:
         alpha_mode      : 1
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'bg.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.83.100
     Duration: 00:00:04.00, start: 0.000000, bitrate: 728 kb/s
       Stream #1:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360, 725 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
    [libvpx-vp9 @ 0x7f8877806600] v1.8.0
    Stream mapping:
     Stream #0:0 (libvpx-vp9) -> format
     Stream #1:0 (h264) -> format
     format -> Stream #0:0 (libx264)
    Press [q] to stop, [?] for help
    [libvpx-vp9 @ 0x7f8877806600] v1.8.0
    [libx264 @ 0x7f8877817200] using SAR=1/1
    [libx264 @ 0x7f8877817200] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 0x7f8877817200] profile High, level 3.1
    [libx264 @ 0x7f8877817200] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'output.mp4':
     Metadata:
       encoder         : Lavf58.20.100
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], q=-1--1, 60 fps, 15360 tbn, 60 tbc (default)
       Metadata:
         encoder         : Lavc58.35.100 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
    frame=  239 fps=113 q=-1.0 Lsize=     232kB time=00:00:03.93 bitrate= 482.5kbits/s dup=1 drop=2 speed=1.86x
    video:228kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.586669%
    [libx264 @ 0x7f8877817200] frame I:1     Avg QP:20.55  size:  5385
    [libx264 @ 0x7f8877817200] frame P:62    Avg QP:24.42  size:  2373
    [libx264 @ 0x7f8877817200] frame B:176   Avg QP:31.31  size:   456
    [libx264 @ 0x7f8877817200] consecutive B-frames:  1.3%  0.8%  2.5% 95.4%
    [libx264 @ 0x7f8877817200] mb I  I16..4: 18.6% 68.4% 13.0%
    [libx264 @ 0x7f8877817200] mb P  I16..4:  1.6%  4.0%  0.7%  P16..4: 14.8%  7.0%  4.5%  0.0%  0.0%    skip:67.5%
    [libx264 @ 0x7f8877817200] mb B  I16..4:  0.2%  0.0%  0.0%  B16..8: 17.4%  2.5%  0.4%  direct: 0.5%  skip:78.9%  L0:53.1% L1:40.4% BI: 6.6%
    [libx264 @ 0x7f8877817200] 8x8 transform intra:60.1% inter:60.4%
    [libx264 @ 0x7f8877817200] coded y,uvDC,uvAC intra: 16.6% 27.4% 10.7% inter: 3.0% 2.2% 0.1%
    [libx264 @ 0x7f8877817200] i16 v,h,dc,p: 56% 37%  6%  2%
    [libx264 @ 0x7f8877817200] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 40%  6% 48%  1%  1%  1%  1%  1%  1%
    [libx264 @ 0x7f8877817200] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 35% 22% 23%  3%  3%  4%  3%  4%  3%
    [libx264 @ 0x7f8877817200] i8c dc,h,v,p: 57% 20% 21%  2%
    [libx264 @ 0x7f8877817200] Weighted P-Frames: Y:0.0% UV:0.0%
    [libx264 @ 0x7f8877817200] ref P L0: 69.3% 12.8% 13.6%  4.3%
    [libx264 @ 0x7f8877817200] ref B L0: 92.9%  5.9%  1.1%
    [libx264 @ 0x7f8877817200] ref B L1: 96.1%  3.9%
    [libx264 @ 0x7f8877817200] kb/s:467.59
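    A minimal sketch of how this is often approached (my assumption, not a verified answer from the thread): keep decoding A.webm with libvpx-vp9 so its alpha plane survives, blend the opaque layers first, then composite the transparent layer and the icon with the overlay filter, which respects alpha, instead of blend=all_mode=normal.

    # Hypothetical command sketch. Assumes BG.MP4, B.mp4 and C.mp4 share the same
    # resolution and pixel format (otherwise insert scale/format filters first);
    # the icon position x=20:y=20 is only a placeholder.
    ffmpeg \
      -i BG.MP4 -i B.mp4 -i C.mp4 \
      -c:v libvpx-vp9 -i A.webm \
      -i icon.png \
      -filter_complex "[0:v][1:v]blend=all_mode=screen:shortest=1[base];[base][2:v]blend=all_mode=overlay:shortest=1[pre];[pre][3:v]overlay=shortest=1[comp];[comp][4:v]overlay=x=20:y=20[out]" \
      -map "[out]" -c:v libx264 -pix_fmt yuv420p output.mp4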
  • ClickOnce deployment looking for a manifest signature on a NuGet-sourced .exe (FFMPEG #) as if it were a .csproj output. Deployment fails

    9 October 2019, by M_Ryce

    I have a C# Forms app which uses the NuGet package "FFMPEG Sharp" (NuGet, GitHub) to generate video from a sequence of images.

    Unlike most NuGet packages, which simply pull in a .dll, installing FFMPEG Sharp places an "FFMPEG" folder in the .csproj root directory, in addition to bringing the appropriate .dll into "packages".

    Inside this folder are a few FFMPEG artifacts and a /bin folder containing the FFMPEG executables. According to the project’s GitHub README, this /bin directory needs to be specified in the app.config.

    From the GitHub README example:

    <appSettings>
     <add key="ffmpegRoot" value="C:\ffmpeg\bin\"></add>
    </appSettings>

    Adjusting the above to match the default location where NuGet placed the dependency artifacts:

    <appSettings>
     <add key="ffmpegRoot" value="..\..\FFMPEG\bin"></add>
    </appSettings>

    Everything related to this dev effort had been smooth sailing until I tried to use the existing ClickOnce deployment for the app. The FFMPEG folder in my .csproj root wasn’t making it into the build output, and therefore the application’s call to the FFMPEG .exe was throwing a null reference error. An understandable result, given that I had not set up any way of ensuring the FFMPEG artifacts reached the build output with the same folder structure as on my local dev box.

    To counter this, I set a POST-build command to XCOPY....

    XCOPY "$(SolutionDir)MyApp\FFMPEG" "$(TargetDir)FFMPEG" /S /Y /I

    ...the NuGet-provisioned FFMPEG artifacts into the build output root, and adjusted the config setting accordingly (see below):

    <appSettings>
     <add key="ffmpegRoot" value=".\FFMPEG\bin"></add>
    </appSettings>

    This worked like a dream when building and running locally. The XCOPY succeeded in placing the FFMPEG folder contents into the compiled solution’s Debug/Release bin, and the updated config referenced them. No errors.

    Attempting to deploy with the .NET ClickOnce tool has produced a rather befuddling error, though.

    (Apologies for formatting ugliness below. I tried but didn’t succeed. The important parts are in bold)

    ERROR SUMMARY
    Below is a summary of the errors, details of these errors are listed later in the log.
    Activation of https://MySite/MyApp/Install/MyApp.application resulted in exception. Following failure messages were detected:
    + Downloading https://MySite/MyApp/Install/Application Files/MyApp/FFMPEG/bin/x86/ffmpeg.exe.deploy did not succeed.
    + The remote server returned an error: (404) Not Found.

    COMPONENT STORE TRANSACTION FAILURE SUMMARY
    No transaction error was detected.
    WARNINGS
    * The manifest for this application does not have a signature. Signature validation will be ignored.
    OPERATION PROGRESS STATUS
    * [10/8/2019 2:03:37 PM] : Activation of https://MySite/MyApp/Install/MyApp.application has started.
    * [10/8/2019 2:03:37 PM] : Processing of deployment manifest has successfully completed.
    * [10/8/2019 2:03:37 PM] : Installation of the application has started.
    * [10/8/2019 2:03:37 PM] : Processing of application manifest has successfully completed.
    * [10/8/2019 2:03:40 PM] : Found compatible runtime version 4.0.30319.
    * [10/8/2019 2:03:40 PM] : Request of trust and detection of platform is complete.
    ERROR DETAILS
    Following errors were detected during this operation.
    * [10/8/2019 2:03:40 PM] System.Deployment.Application.DeploymentDownloadException (Unknown subtype)
    - Downloading https://MySite/MyApp/Install/Application Files/MyApp/FFMPEG/bin/x86/ffmpeg.exe.deploy did not succeed.

    ...

    My interpretation of this is that the ClickOnce deployment is treating the NuGet-sourced .exe files as if they were compiled code from this very project, and checking them for a signed manifest.

    This ClickOnce deployment was not set up by me and had not previously needed to account for external artifacts like these in the output. I do not believe turning off signed assemblies is an option for me, for security reasons.

    Is there a way to make ClickOnce deployments ignore a specific .exe when checking for signed manifests? I think the "correct" intended usage is for FFMPEG to be pre-installed on the machine as a stand-alone application, but this is not an option for me at this time. I will need FFMPEG to be brought in by the ClickOnce deployment.
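
    A minimal sketch of one alternative that is commonly suggested for this kind of setup (my assumption, not something from the original post): instead of copying the FFMPEG folder in a post-build step, declare its files as Content items in the .csproj so they appear under the project’s Application Files and are published by ClickOnce like any other data file.

    <!-- Hypothetical .csproj fragment; the wildcard path assumes the
         NuGet-provisioned FFMPEG folder sits in the project root. -->
    <ItemGroup>
     <Content Include="FFMPEG\**\*">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
     </Content>
    </ItemGroup>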

  • Preprocessing of Columbia active speaker dataset

    21 October 2019, by aspit794

    I would like to use the Columbia dataset (which is 1 video https://youtu.be/6GzxbrO0DHM) and its ground truth annotation (http://www.jaychakravarty.com/wp-content/uploads/2016/02/columbiaDatasetGroundTruth.zip).

    The annotation contains the frame_id (int), the x and y position of the bounding box (int, int), the width (equal to the height) of the bounding box (int), and the ground truth for the active speaker (0 or 1).

    More information: https://www.jaychakravarty.com/active-speaker-detection/

    I managed to download the video and extract the frames with

    ffmpeg -nostdin -hide_banner -loglevel 0 -i columbia.mkv -an columbia/frame_%06d.jpg

    and to draw the ground-truth bounding boxes onto the video frames (where available).

    However, the bounding boxes are not in the correct places, and I am a bit helpless.

    I tried converting the video to a different frame rate (e.g. 25 fps), but the annotation still does not fit.

    I have read every available paper related to this dataset, but none of them mentions anything about preprocessing, and the codebase is not publicly available.

    How should I preprocess the video to interpret the annotation?
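
    A small sanity check worth sketching first (my assumption, not part of the original question): confirm the stream’s frame rate with ffprobe, then extract the frames in passthrough mode so no frames are duplicated or dropped and the image numbering stays aligned with the decoder’s frame order, which is presumably what frame_id refers to.

    # Hypothetical sketch: inspect the video stream, then write one JPEG per
    # decoded frame with no frame-rate conversion (-vsync 0 = passthrough).
    # Note that image2 numbering starts at 1 by default (-start_number changes
    # it), which matters if frame_id in the annotation is 0-based.
    ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 columbia.mkv
    ffmpeg -nostdin -hide_banner -i columbia.mkv -an -vsync 0 -qscale:v 2 columbia/frame_%06d.jpg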