Keyword: Tags / soundtrack

Other articles (75)

  • User profiles

    12 April 2011, by

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized; it is visible only when the visitor is logged in to the site.
    The user can access profile editing from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customization of the form used to publish media, sections, news items, editorials, and links to other sites.
    Each object's publication form can therefore be customized.
    To access the customization of form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires" (form mask configuration).
    Then select the form to modify by clicking on its object type. (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out.

On other sites (6862)

  • Ramp / Slow Motion in iOS

    26 June 2013, by 2vision2

    I need to apply "ramp" slow motion to a video file (along with its audio) between certain frames, and save the ramped result as a new video.

    Ref: http://www.youtube.com/watch?v=BJ3_xMGzauk (watch from 0 to 10s)

    From my analysis, the AVFoundation framework looks like it can help.

    Ref: http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/00_Introduction.html

    Copied and pasted from the above link:

    "
    Editing
    AV Foundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks, and adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks ; and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it's collapsed to a file.
    On iOS 4.1 and later, you can also create an asset from media such as sample buffers or still images using an asset writer.

    "

    Questions:
    Can I "ramp" slow-motion the video/audio file using the AVFoundation framework, or is there another package available? If I want to handle the audio and video separately, how should I go about it?
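
    A note beyond the original question: within AVFoundation, the time-scaling hook behind this kind of ramp is AVMutableComposition's scaleTimeRange:toDuration:, which stretches a chosen time range of a composition. For comparison, here is a minimal command-line sketch of the same idea using ffmpeg rather than AVFoundation, assuming a hypothetical input.mp4 with one video and one audio stream, slowing the 2 s to 4 s segment to half speed:

    # Sketch only: slow seconds 2-4 to half speed, keep the rest at normal speed.
    # Video is retimed with setpts; audio is stretched (pitch-preserving) with atempo=0.5.
    ffmpeg -i input.mp4 -filter_complex \
       "[0:v]trim=0:2,setpts=PTS-STARTPTS[v1]; \
        [0:v]trim=2:4,setpts=2.0*(PTS-STARTPTS)[v2]; \
        [0:v]trim=4,setpts=PTS-STARTPTS[v3]; \
        [0:a]atrim=0:2,asetpts=PTS-STARTPTS[a1]; \
        [0:a]atrim=2:4,asetpts=PTS-STARTPTS,atempo=0.5[a2]; \
        [0:a]atrim=4,asetpts=PTS-STARTPTS[a3]; \
        [v1][a1][v2][a2][v3][a3]concat=n=3:v=1:a=1[v][a]" \
       -map "[v]" -map "[a]" ramped.mp4

    A true gradual ramp (speed changing over time) would replace the constant 2.0 factor with a time-dependent setpts expression.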

  • ffmpeg got stuck while trying to crossfade merge two videos

    30 June 2017, by Jeflopo

    I'm trying to do a 1-second crossfade merge between two videos: an intro video (39 seconds long) and the main video. When I executed the command it started working without throwing errors, but at some frame ffmpeg gets stuck.

    I have read a lot of Q&A here on Stack Overflow, as well as the official docs, but I can't solve this, so:

    This is the command:

    ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex \
       "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; \
       [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; \
       [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; \
       [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; \
       [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; \
       [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; \
       [fadein]fifo[fadeinfifo]; \
       [fadeout]fifo[fadeoutfifo]; \
       [fadeoutfifo][fadeinfifo]overlay[crossfade]; \
       [firstclip][crossfade][secondclip]concat=n=3[output]; \
       [0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"

    Here's the raw command (the exact command I used):

    ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -vcodec libx264 -map "[output]" -map "[audio]" "outputs/240p.mp4"

    The "error" is reproducible both with and without the -an option and the acrossfade filter.

    This is the output:

    PS C:\scripts\ffmpeg> ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -an -filter_complex "[0:v]trim=start=0:end=38,setpts=PTS-STARTPTS[firstclip]; [0:v]trim=start=38:end=39,setpts=PTS-STARTPTS[fadeoutsrc]; [1:v]trim=start=1,setpts=PTS-STARTPTS[secondclip]; [1:v]trim=start=0:end=1,setpts=PTS-STARTPTS[fadeinsrc]; [fadeinsrc]format=pix_fmts=yuva420p, fade=t=in:st=0:d=1:alpha=1[fadein]; [fadeoutsrc]format=pix_fmts=yuva420p, fade=t=out:st=0:d=1:alpha=1[fadeout]; [fadein]fifo[fadeinfifo]; [fadeout]fifo[fadeoutfifo]; [fadeoutfifo][fadeinfifo]overlay[crossfade]; [firstclip][crossfade][secondclip]concat=n=3[output]; [0:a][1:a] acrossfade=d=1 [audio]" -map "[output]" -map "[audio]" "outputs/240p.mp4"
    ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
     built with gcc 7.1.0 (GCC)
     configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
     libavutil      55. 67.100 / 55. 67.100
     libavcodec     57.100.102 / 57.100.102
     libavformat    57. 75.100 / 57. 75.100
     libavdevice    57.  7.100 / 57.  7.100
     libavfilter     6. 94.100 /  6. 94.100
     libswscale      4.  7.101 /  4.  7.101
     libswresample   2.  8.100 /  2.  8.100
     libpostproc    54.  6.100 / 54.  6.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/intro.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.72.101
     Duration: 00:06:24.45, start: 0.000000, bitrate: 491 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 353 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 130 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'inputs/240p.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf56.40.101
     Duration: 00:06:24.43, start: 0.000000, bitrate: 375 kb/s
       Stream #1:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 426x240 [SAR 1:1 DAR 71:40], 243 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
       Metadata:
         handler_name    : SoundHandler
    Stream mapping:
     Stream #0:0 (h264) -> trim
     Stream #0:0 (h264) -> trim
     Stream #0:1 (aac) -> acrossfade:crossfade0
     Stream #1:0 (h264) -> trim
     Stream #1:0 (h264) -> trim
     Stream #1:1 (aac) -> acrossfade:crossfade1
     concat -> Stream #0:0 (libx264)
     acrossfade -> Stream #0:1 (aac)
    Press [q] to stop, [?] for help
    [libx264 @ 00000000026b2240] using SAR=1/1
    [libx264 @ 00000000026b2240] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 00000000026b2240] profile High, level 2.1
    [libx264 @ 00000000026b2240] 264 - core 152 r2851 ba24899 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=7 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to 'outputs/240p.mp4':
     Metadata:
       major_brand     : isom
       minor_version   : 512
       compatible_brands: isomiso2avc1mp41
       encoder         : Lavf57.75.100
       Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 426x240 [SAR 1:1 DAR 71:40], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
       Metadata:
         encoder         : Lavc57.100.102 libx264
       Side data:
         cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
       Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 44100 Hz, stereo, fltp, 128 kb/s (default)
       Metadata:
         encoder         : Lavc57.100.102 aac
    frame=10369 fps=503 q=28.0 size=   24064kB time=00:06:55.68 bitrate= 474.2kbits/s speed=20.2x

    At around frame 10000 it gets stuck... I waited for an hour, but it stays stuck.
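
    An aside that is not part of the original post: on FFmpeg 4.3 or newer (more recent than the N-86669 build in the log above), the same 1-second crossfade can be expressed with the xfade and acrossfade filters instead of the manual trim/overlay/concat graph, which may sidestep the stall. A hedged sketch against the same inputs (the settb normalization, the 39 s atrim of the intro audio, and the output name are assumptions):

    # Sketch only, requires FFmpeg 4.3+ (xfade). Per the log, both inputs are
    # already 426x240 yuv420p at 25 fps, which xfade requires to match.
    ffmpeg -i "inputs/intro.mp4" -i "inputs/240p.mp4" -filter_complex \
       "[0:v]settb=AVTB[v0]; [1:v]settb=AVTB[v1]; \
        [v0][v1]xfade=transition=fade:duration=1:offset=38[v]; \
        [0:a]atrim=0:39,asetpts=PTS-STARTPTS[a0]; \
        [a0][1:a]acrossfade=d=1[a]" \
       -map "[v]" -map "[a]" -vcodec libx264 "outputs/240p_xfade.mp4"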

    I've updated ffmpeg:

    ffmpeg -version
    ffmpeg version N-86669-gc1d1274 Copyright (c) 2000-2017 the FFmpeg developers
    built with gcc 7.1.0 (GCC)
    configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
    libavutil      55. 67.100 / 55. 67.100
    libavcodec     57.100.102 / 57.100.102
    libavformat    57. 75.100 / 57. 75.100
    libavdevice    57.  7.100 / 57.  7.100
    libavfilter     6. 94.100 /  6. 94.100
    libswscale      4.  7.101 /  4.  7.101
    libswresample   2.  8.100 /  2.  8.100
    libpostproc    54.  6.100 / 54.  6.100

    I used these references:

  • FFmpeg concat streams

    9 July 2017, by chourizo

    I am trying to receive two H.264 UDP streams from two cameras and save them to one file (so that they are always synchronized). I have tried a lot of things, but ffmpeg always says there is no video in the second stream (although I can watch it).

    ffmpeg -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1234 -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1235 -filter_complex "[0:v]fps=15,scale=320:240,setsar=1/1,setpts=PTS-STARTPTS[v0]; [1:v]fps=15,scale=320:240,setsar=1/1,setpts=PTS-STARTPTS[v1]; [v0][v1]concat=n=2:v=1:a=0 [v0] [v1]" -map "[v0]" -map "[v1]" -threads 0 -y kk.ts

    Is it possible to apply concat to real-time streams, so that the result is a video containing two programs?
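
    A hedged sketch that is not part of the original post: if the goal is a single file carrying both cameras as two separate programs (rather than one played after the other), the streams can be mapped side by side into one MPEG-TS and grouped with ffmpeg's -program output option instead of concat. The program titles and output name below are assumptions:

    # Sketch only: copy both H.264 streams into one transport stream,
    # declaring each output stream as its own program.
    ffmpeg -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1234 \
           -probesize 20M -analyzeduration 20M -i udp://@127.0.0.1:1235 \
           -map 0:v -map 1:v -c:v copy \
           -program title=cam1:st=0 -program title=cam2:st=1 \
           -f mpegts -y kk_programs.ts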