
Media (91)

Other articles (59)

  • Contribute to documentation

    13 avril 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation from users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • Enabling visitor registration

    12 April 2011, by

    It is also possible to enable visitor registration, which lets anyone open an account on the channel in question, for example in the context of open projects.
    To do so, go to the site’s configuration area and choose the "User management" submenu. The first form shown corresponds to this feature.
    By default, MediaSPIP created at initialization a menu item in the top menu of the page leading (...)

On other sites (7248)

  • How to combine ffmpeg and ImageMagick scripts which work separately but don’t work together?

    3 October 2018, by senops

    These scripts get the picture dimensions and stack the images one on top of the other.
    The first working script:

    @echo off&setlocal enabledelayedexpansion

    for /f "tokens=1,2 delims=:" %%x in ('identify -format %%w:%%h image_1.jpg') do set w1=%%x&set h1=%%y
    for /f "tokens=1,2 delims=:" %%x in ('identify -format %%w:%%h image_2.jpg') do set w2=%%x&set h2=%%y
    echo %w1% %h1% %w2% %h2%
    pause

    The second working script:

    @echo off&setlocal enabledelayedexpansion

    set /a w1=1000
    set /a w2=500
    set /a h1=700
    set /a h2=400

    if !w1! LSS !h1! (
     set "p1=oh/mdar:h='max(ih,main_h)'"
     If !w2! LSS !h2! (
       set "p2=oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
     ) else (
       set "p2='max(iw,main_w)':h=ow/mdar[1max][0max];"
     )
    ) else (
     set "p1='max(iw,main_w)':h=ow/mdar"
     if !w2! LSS !h2! (
       set "p2=oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
     ) else (
       set "p2='max(iw,main_w)':h=ow/mdar[1max][0max];[0max][1max]vstack"
     )  
    )

    ffmpeg -i image_1.jpg -i image_2.jpg -filter_complex "[0][1]scale2ref=w=!p1![0max][1ref];[1ref][0max]scale2ref=w=!p2!" -q:v 1 -y combined.jpg

    But when I put them together, they don’t work:

    @echo off&setlocal enabledelayedexpansion

    for /f "tokens=1,2 delims=:" %%x in ('identify -format %%w:%%h image_1.jpg') do set w1=%%x&set h1=%%y
    for /f "tokens=1,2 delims=:" %%x in ('identify -format %%w:%%h image_2.jpg') do set w2=%%x&set h2=%%y

    if !w1! LSS !h1! (
     set "p1=oh/mdar:h='max(ih,main_h)'"
     If !w2! LSS !h2! (
       set "p2=oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
     ) else (
       set "p2='max(iw,main_w)':h=ow/mdar[1max][0max];"
     )
    ) else (
     set "p1='max(iw,main_w)':h=ow/mdar"
     if !w2! LSS !h2! (
       set "p2=oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
     ) else (
       set "p2='max(iw,main_w)':h=ow/mdar[1max][0max];[0max][1max]vstack"
     )  
    )

    ffmpeg -i image_1.jpg -i image_2.jpg -filter_complex "[0][1]scale2ref=w=!p1![0max][1ref];[1ref][0max]scale2ref=w=!p2!" -q:v 1 -y combined.jpg
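
    The branching above is easy to reproduce outside batch for inspection. The following is a minimal Python sketch (an assumption for illustration, not the asker’s code) that builds the same scale2ref parameters from two image sizes and prints the composed filtergraph, which makes it possible to eyeball the string before handing it to ffmpeg:

    ```python
    def scale2ref_params(w1, h1, w2, h2):
        # Mirrors the batch script's nested ifs verbatim: portrait inputs
        # scale by height, landscape inputs by width, then vstack.
        if w1 < h1:
            p1 = "oh/mdar:h='max(ih,main_h)'"
            if w2 < h2:
                p2 = "oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
            else:
                # Note: this branch, copied from the batch script, ends with
                # ";" and has no vstack -- worth inspecting in the output.
                p2 = "'max(iw,main_w)':h=ow/mdar[1max][0max];"
        else:
            p1 = "'max(iw,main_w)':h=ow/mdar"
            if w2 < h2:
                p2 = "oh/mdar:h='max(ih,main_h)'[1max][0max];[0max][1max]vstack"
            else:
                p2 = "'max(iw,main_w)':h=ow/mdar[1max][0max];[0max][1max]vstack"
        return p1, p2

    def filter_complex(w1, h1, w2, h2):
        """Compose the same -filter_complex string the batch script builds."""
        p1, p2 = scale2ref_params(w1, h1, w2, h2)
        return f"[0][1]scale2ref=w={p1}[0max][1ref];[1ref][0max]scale2ref=w={p2}"

    # Print the graph for a sample landscape/landscape pair and inspect it
    # before running ffmpeg with it.
    print(filter_complex(1000, 700, 500, 400))
    ```

    The dimensions could come from ImageMagick’s identify, as in the batch scripts, or from any image library; the point of the sketch is only to separate "compose the filter string" from "run ffmpeg", so the two halves can be debugged independently.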
  • lavc/mjpeg : Add profiles for MJPEG using SOF marker codes

    23 November 2017, by Mark Thompson
    lavc/mjpeg : Add profiles for MJPEG using SOF marker codes
    

    This is needed by later hwaccel code to tell which encoding process was
    used for a particular frame, because hardware decoders may only support a
    subset of possible methods.

    • [DH] libavcodec/avcodec.h
    • [DH] libavcodec/mjpegdec.c
    • [DH] libavcodec/version.h
    • [DH] tests/ref/fate/api-mjpeg-codec-param
  • How to force A/V sync using mkvmerge and external time-codes?

    19 April 2017, by b..

    Background

    I’m working on a project where the video and audio are algorithmic interpretations of an MKV source file. I use ffmpeg’s -ss and -t to extract a particular region of audio and video to separate files. I use the video’s scene changes in the audio process (i.e. the audio changes on each video scene change), so sync is crucial.

    Audio is 48khz, using 512 sample blocks.
    Video is 23.976fps (I also tried 24).

    I store the frame onset of each scene change in a file in terms of cumulative blocks:

    blocksPerFrame = (48000 / 512) / 23.976
    sceneOnsetBlock = sceneOnsetFrame*blocksPerFrame

    I use these blocks in my audio code to treat the samples associated with each scene as a group.
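
    As a sanity check on the arithmetic, the two formulas above can be written out directly (a sketch using the post’s numbers, not the author’s actual code):

    ```python
    SAMPLE_RATE = 48000   # Hz
    BLOCK_SIZE = 512      # samples per audio block
    FPS = 23.976          # video frame rate

    # (blocks per second) / (frames per second) = blocks per frame
    blocks_per_frame = (SAMPLE_RATE / BLOCK_SIZE) / FPS   # ~3.91

    def scene_onset_block(scene_onset_frame):
        """Cumulative block index at which a scene starts."""
        return scene_onset_frame * blocks_per_frame

    # A frame index round-trips to the same wall-clock time through either
    # path, so this mapping by itself introduces no drift:
    frame = 10000
    seconds_via_video = frame / FPS
    seconds_via_audio = scene_onset_block(frame) * BLOCK_SIZE / SAMPLE_RATE
    ```

    Algebraically the two paths cancel to frame / FPS exactly, which is consistent with the claim that the math checks out; any drift would have to come from elsewhere in the pipeline.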

    When I combine the audio and video back together (currently using ffmpeg to generate mp4 (video) and mp3 (audio) in an MKV container), the audio and video start off in sync but drift increasingly until they end up 35 seconds apart. The worst part is that the audio lag is nonlinear! By nonlinear, I mean that if I plot the lag against the location of that lag in time, I don’t get a line but what you see in the image below. Because of this nonlinearity, I can’t just shift or scale the audio to fit the video. I cannot figure out the cause of this nonlinearly increasing audio delay; I’ve double- and triple-checked my math.

    (Image: cumulative lag plotted against time)

    Since I know the exact timing of the scene changes, I should be able to generate "external timecodes" (from the blocks above) for mkvmerge to perfectly sync the output!
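
    For reference, mkvmerge’s external timecode files in "v2" format are just a header line followed by one absolute timestamp in milliseconds per frame (for video) or per packet (for audio). A minimal sketch of generating one for a constant-frame-rate video track (the file name is an assumption for illustration):

    ```python
    FPS = 23.976

    def write_timecodes_v2(path, n_frames, fps=FPS):
        """Write an mkvmerge 'timecode format v2' file: one absolute
        timestamp in milliseconds per frame, in order."""
        with open(path, "w") as f:
            f.write("# timecode format v2\n")
            for i in range(n_frames):
                f.write(f"{i * 1000.0 / fps:.6f}\n")

    write_timecodes_v2("video.timecodes.txt", 5)
    ```

    The file would then be attached with something like mkvmerge --timecodes 0:video.timecodes.txt (track ID is an assumption; newer MKVToolNix releases spell the option --timestamps, so check the version’s documentation).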

    Subquestions:

    1. Is this the best approach (beyond trying to figure out what went wrong in the first place)? As I’m using my video frames as a
      reference, if I use the scene changes as timecodes for the audio,
      will it force the video to match the audio or vice versa? I’m much less concerned with the duration than with the sync. The video was much more laborious to produce, so I’d rather lose some sound than some frames.

    2. I’m not clear on what numbers to use in the timecodes file.
      According to the mkvmerge documentation, "For video this is exactly
      one frame, for audio this is one packet of the specific audio type."
      Since I’m using MP3, what is the packet size? Ideally, I could specify a packet size (in the audio encoder?) that matches my block size (512) to keep things consistent and simple. Can I do this with ffmpeg?
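
    On the packet-size subquestion: an MPEG-1 Layer III (MP3) frame always carries 1152 samples, so at 48 kHz the packet duration is fixed and cannot be made to match a 512-sample block. The mismatch is easy to quantify (a sketch for illustration, not from the post):

    ```python
    MP3_SAMPLES_PER_FRAME = 1152   # fixed by the MPEG-1 Layer III spec
    SAMPLE_RATE = 48000            # Hz
    BLOCK_SIZE = 512               # the post's analysis block size

    packet_ms = MP3_SAMPLES_PER_FRAME * 1000.0 / SAMPLE_RATE   # 24.0 ms
    blocks_per_packet = MP3_SAMPLES_PER_FRAME / BLOCK_SIZE     # 2.25

    # 1152 is not a multiple of 512, so MP3 packet boundaries can never
    # line up exactly with the 512-sample blocks at this sample rate.
    ```

    So for an MP3 audio track, each timecode-file entry would cover one 1152-sample packet (24 ms at 48 kHz), regardless of the encoder’s bitrate settings.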

    Thank you!