
Other articles (43)

  • Media quality after processing

    21 June 2013, by

    Getting the right settings in the software that processes media matters for striking a balance between the parties involved (the host's bandwidth, media quality for the editor and the visitor, accessibility for the visitor). How should you set the quality of your media?
    The higher the media quality, the more bandwidth is used, and a visitor on a low-bandwidth internet connection will have to wait longer. Conversely, the lower the media quality, the more degraded the media becomes, or even (...)

  • Customizing by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • The SPIPmotion queue

    28 November 2010, by

    A queue stored in the database
    When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
    This new table is made up of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to be processed; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document should automatically be attached; objet, the type of object to which (...)

On other sites (8530)

  • Can I get pictures/stills/photos from inside a container file from a CD-I disc?

    8 December 2017, by user9047197

    I have ffmpeg set up.

    Is there a way to extract pictures/stills/photos (etc.) from a container (file) that's from an old CD-I game that I have?

    I don't want to extract the audio or video, and I don't want frames from the videos either.

    I want the bitmaps (etc) from INSIDE that container file.

    I know my Windows 8.1 PC can't read inside that container file, so I'm hoping there's a way to extract all the files (I want) using ffmpeg instead.

    (IsoBuster only gives the audio and video, so I already know about IsoBuster.)

    I think there are no individual headers for the pictures/stills/photos, etc.
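
    As a first check (not from the original post), ffprobe can at least show which streams ffmpeg recognises inside the container; a minimal sketch, using the green.3t name from the ExifTool output below:

    ffprobe -hide_banner -show_format -show_streams green.3t

    If only the MPEG audio and video streams are reported, any separate image assets are likely stored in a CD-I-specific layout that ffmpeg does not expose.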

    Here's what ExifTool decoded the file as:

    ExifTool Version Number : 10.68
    File Name : green.3t
    File Size : 610 MB
    File Permissions : rw-rw-rw-
    File Type : MPEG
    File Type Extension : mpg
    MIME Type : video/mpeg
    MPEG Audio Version : 1
    Audio Layer : 2
    Audio Bitrate : 80 kbps
    Sample Rate : 44100
    Channel Mode : Single Channel
    Mode Extension : Bands 4-31
    Copyright Flag : False
    Original Media : False
    Emphasis : None
    Image Width : 368
    Image Height : 272
    Aspect Ratio : 1.0695
    Frame Rate : 25 fps
    Video Bitrate : 1.29 Mbps
    Duration : 1:02:12 (approx)
    Image Size : 368x272
    Megapixels : 0.100

    Thank you for reading and - help!! :D

  • How to use ffmpeg for live streaming fragmented mp4?

    2 May 2018, by Cross_

    Following a variety of stackoverflow suggestions I am able to create a fragmented MP4 file, then slice it into the header part (FTYP & MOOV) and various segment files (each containing MOOF & MDAT). Using Media Source Extensions I download and add the individual segments - that’s all working great.

    Now I would like to create a live streaming webcam with that same approach. I had hoped that I could just send the MOOV box to each new client plus the currently streaming segment. This however is rejected as invalid data in the browser. I have to start with the first segment and they have to be appended in order. That’s not helpful for a live streaming scenario where you don’t want to see the whole stream from the start. Is there any way to alter the files so that the segments are truly independent and you can start playback from the middle ?

    For reference, this is how I am setting up the stream on the OS X server:

    $ ffmpeg -y -hide_banner -f avfoundation -r 30 -s 1280x720 \
        -pixel_format yuyv422 -i default -an \
        -c:v libx264 -profile:v main -level 3.2 -preset medium -tune zerolatency \
        -flags +cgop+low_delay \
        -movflags empty_moov+omit_tfhd_offset+frag_keyframe+default_base_moof+isml \
        -pix_fmt yuv420p -f mp4 pipe:1 | split_into_segments.py  # "-f mp4 pipe:1" (fMP4 to stdout) is assumed; the quoted command omitted an explicit output target

    Playback is done with a slightly modified version of this sample code:
    https://github.com/bitmovin/mse-demo/blob/master/index.html
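
    One way to get segments that a late-joining client can play (an alternative sketch, not the poster's setup) is to let ffmpeg's dash muxer write the init segment and the individual media segments itself; roughly, assuming a reasonably recent ffmpeg build:

    ffmpeg -f avfoundation -r 30 -s 1280x720 -pixel_format yuyv422 -i default -an \
        -c:v libx264 -preset medium -tune zerolatency -pix_fmt yuv420p \
        -f dash -seg_duration 2 -streaming 1 -use_template 1 -use_timeline 0 \
        -window_size 5 stream.mpd

    Each media segment then starts at a keyframe and depends only on the init segment, so a new viewer can append the init segment followed by whichever segment is current.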

  • Precisely trim a video with ffmpeg

    14 August 2021, by user16665676

    I want to precisely trim a video using ffmpeg (between keyframes), while keeping the original quality and the audio in sync. I understand that using -c copy will not do this and that re-encoding is necessary. Rather than re-encoding the entire trim duration, I have a solution:

    


    assuming the following:

    • trim start is 00:00:10
    • trim end is 00:00:20
    • closest keyframe to trim start is 00:00:10.5

    trim from the specified start to the first keyframe (keyframes are provided by ffprobe):

    


    ffmpeg -ss 00:00:10 -i input.mp4 -t 0.5 -enc_timebase -1 -vsync 2 1.mp4
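
    For reference, the keyframe timestamps could presumably be listed with ffprobe along these lines (the exact command is not shown in the original post):

    ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -of csv=p=0 input.mp4 | grep K

    Each printed line gives a packet's presentation timestamp followed by its flags, with K marking a keyframe.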

    


    -enc_timebase -1 -vsync 2 are necessary, otherwise 2.mp4 has a much slower playback speed (though the audio remains the same and consequently finishes much sooner than the video).

    


    trim, using -c copy, from the keyframe above to the end time:

    


    ffmpeg -ss 00:00:10.5 -t 00:00:09.5 -i input.mp4 -c copy 2.mp4

    


    finally, concatenate both of the videos:

    


    ffmpeg -f concat -i input.txt -c copy output.mp4
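
    For the concat demuxer, input.txt would presumably just list the two parts in order, e.g.:

    file '1.mp4'
    file '2.mp4'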

    


    This process works well and is quick, as I'm only re-encoding up to the first keyframe (usually less than a second). However, when concatenating the trimmed (1.mp4) and copied (2.mp4) parts, there is a noticeable jitter (in both audio and video) in the concatenated output where the first keyframe starts.

    


    After extracting each individual frame from both videos, it's evident that the keyframe is included twice: once at the end of the trimmed part (1.mp4) and once at the start of the copied part (2.mp4). Also, if I don't include -enc_timebase -1 -vsync 2, the output plays back in "slow motion" on the copied (2.mp4) part, and the first keyframe stays on screen for much longer than the others. I believe this to be the cause.

    


    Is there a better way to do this? Am I messing up any timings/syncing? I'm not entirely sure whether -enc_timebase -1 -vsync 2 are the right options to use.