
Media (91)

Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in the standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you wish to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)

  • Improving the base version

    13 September 2013

    A nicer multiple-select
    The Chosen plugin improves the usability of multiple-select fields. See the two images below for a comparison.
    To use it, activate the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen), enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-select lists (...)

On other sites (12347)

  • Why do FFmpeg B-frames and b_pyramid offset start_pts (and start_time) in fragmented output?

    13 July 2022, by Vans S

    It seems that when transcoding into fragmented or segmented output, start_pts is not 0, and no combination of options makes it 0 other than setting B-frames to 0 and b_pyramid to 0. This does not happen with regular, non-fragmented output.

    Does anyone know why this is, and how to prevent it? I believe it is causing timesync issues with playback in browsers (every fragment delays the video slightly more), such that after 2-3 hours the stream can end up delayed by 15 minutes or more.

    Example where each segment's start_pts is not 0:

    ffmpeg -i in.mp4 -pix_fmt yuv420p -an -f yuv4mpegpipe -frames:v 150 - | ffmpeg -f yuv4mpegpipe -i - -y -force_key_frames 1,2,3,4 -map 0 -codec:v libx264 -f segment -segment_list out.csv -segment_times 2,4 -segment_time_delta 0.05 -preset:v fast -segment_format_options movflags=+faststart out%03d.mp4

    start_pts is 0 here if we add:

    -x264opts b_pyramid=0 -bf 0
    # or change the codec to
    -codec:v mpeg4
    # or output a regular mp4
    ffmpeg -i in.mp4 -pix_fmt yuv420p -an -f yuv4mpegpipe -frames:v 150 - | ffmpeg -f yuv4mpegpipe -i - -y out.mp4

    EDIT: Looking into this further, I am starting to think this is a bug in how empty_moov interacts with the negative_cts_offsets flag (when empty_moov is used, negative_cts_offsets seems to be ignored, and we need empty_moov for full web-browser support).
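    The mechanism behind the offset can be sketched with plain arithmetic. The illustrative Python below is not FFmpeg internals; the frame order, timebase and delay values are assumptions. It shows why a B-frame reorder delay forces a muxer that cannot store negative DTS to shift every timestamp upward, which is exactly what leaves start_pts above zero:

```python
TIMEBASE = 1001  # ticks per frame, illustrative value

def shifted_timestamps(pts_list, reorder_delay):
    """Shift all timestamps up so the first DTS is >= 0.

    pts_list: presentation timestamps in decode order.
    reorder_delay: decoder delay in frames caused by B-frames
    (b_pyramid adds a further level of delay).
    """
    # DTS lags PTS by the reorder delay.
    dts_list = [pts - reorder_delay * TIMEBASE for pts in pts_list]
    # A muxer that disallows negative DTS shifts everything up by the
    # same amount, so PTS and DTS stay consistent with each other.
    shift = max(0, -min(dts_list))
    return ([p + shift for p in pts_list], [d + shift for d in dts_list])

# Decode order I P B B, presentation order I B B P
pts_in_decode_order = [0, 3 * TIMEBASE, 1 * TIMEBASE, 2 * TIMEBASE]
new_pts, new_dts = shifted_timestamps(pts_in_decode_order, reorder_delay=2)
# The smallest shifted DTS is 0, and the first PTS (start_pts) ends up
# at 2 * TIMEBASE instead of 0.
```

    With reorder_delay=0 (no B-frames, as with -bf 0), the shift is zero and start_pts stays 0, matching the behaviour described above.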

  • FFmpeg, VideoToolbox and AVPlayer in iOS

    9 January 2017, by Hwangho Kim

    I have a question about how these things are connected and what exactly they do.

    FYI, I have some experience with video players and with encoding and decoding.

    In my job I deal with UDP streaming from a server: I receive it with FFmpeg, decode it, and draw it with OpenGL. I also use FFmpeg for the video player.

    These are my questions...

    1. Can only FFmpeg decode the UDP stream (encoded with FFmpeg on the server), or can something else?

    I found some useful information about VideoToolbox, which can decode a stream with hardware acceleration on iOS. Could I also decode the stream from the server with VideoToolbox?

    2. If it is possible to decode with VideoToolbox (that is, if VideoToolbox could replace FFmpeg), then what is the videotoolbox source code in FFmpeg for? Why is it there?

    In my decoder I create an AVCodecContext from the stream; it has hwaccel and hwaccel_context fields, both of which are set to null. I thought VideoToolbox was an API that helps FFmpeg use iOS hardware acceleration, but that does not seem to be true so far...

    3. If VideoToolbox can decode a stream, can it also decode local H.264 files, or only streams?

    AVPlayer is a good tool for playing video, but if VideoToolbox could replace AVPlayer, what would the benefit be? Or is that impossible?

    4. Does FFmpeg only use the CPU for decoding (software decoding), or hardware acceleration as well?

    When I play a video with my FFmpeg-based player, CPU usage goes over 100%. Does that mean FFmpeg uses only the software decoder, or is there a way to use hardware acceleration?
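    On the last point, one detail that may help: FFmpeg's command-line tool decodes in software on the CPU by default, but builds with VideoToolbox support accept a -hwaccel videotoolbox flag to request the hardware decoder. A small sketch in Python just to show the shape of such a command (the file names are placeholders, not part of the original post):

```python
import shlex

# Argument list for an FFmpeg invocation requesting the VideoToolbox
# hardware decoder on Apple platforms. "input.mp4" and the null output
# are placeholders; this only shows where the flag goes.
cmd = shlex.split("ffmpeg -hwaccel videotoolbox -i input.mp4 -f null -")
# Note: the -hwaccel option must appear before the -i it applies to.
```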

    Please excuse my poor English; any answer would be appreciated.

    Thanks.

  • Python 3: using ffmpeg in a subprocess, getting a stderr decoding error

    4 May 2024, by jdauthre

    I am running ffmpeg as a subprocess and using its stderr to get various bits of data, such as the subtitle stream IDs. It works fine for most videos, but one with Japanese subtitles results in an error:

    'charmap' codec can't decode byte in position xxx: character maps to <undefined>

    Much googling suggests the problem is that the Japanese text requires Unicode, whereas English does not. The solutions offered deal with files, and I cannot find a way to do the same with stderr. The relevant code is below:

import subprocess

# ffmpeg, fileSelected and startupinfo are defined elsewhere in the program.
command = [ffmpeg, "-y", "-i", fileSelected, "-acodec", "pcm_s16le",
           "-vn", "-t", "3", "-f", "null", "-"]
print(command)
proc = subprocess.Popen(command, stderr=subprocess.PIPE, stdin=subprocess.PIPE,
                        universal_newlines=True, startupinfo=startupinfo)

stream = ""
for line in proc.stderr:
    try:
        print("line", line)
    except Exception as error:
        print("print", error)
    line = line[:-1]
    if "Stream #" in line:
        estream = line.split("#", 1)[1]
        estream = estream.split(" (", 1)[0]
        print("estream", estream)
        stream = stream + estream + "\n"
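
    A minimal sketch of one way around the decoding error, assuming Python 3.6+: pass encoding="utf-8" and errors="replace" to Popen (in place of universal_newlines=True) so stderr is decoded as UTF-8 instead of the platform "charmap" codec. The child command below is a stand-in that just writes a Japanese "Stream #" line to stderr, not the real ffmpeg invocation:

```python
import subprocess
import sys

# Stand-in for ffmpeg: a child process that emits a Japanese line on stderr.
child = [sys.executable, "-c",
         "import sys; sys.stderr.buffer.write('Stream #0:2: 日本語\\n'.encode('utf-8'))"]

proc = subprocess.Popen(
    child,
    stderr=subprocess.PIPE,
    encoding="utf-8",   # decode stderr as UTF-8, not the locale codec
    errors="replace",   # substitute U+FFFD instead of raising
)
lines = [line.rstrip("\n") for line in proc.stderr]
proc.wait()
# lines now holds the decoded "Stream #..." text, with no 'charmap' error.
```

    The same two keyword arguments can be passed to the real ffmpeg Popen call; the rest of the stderr-parsing loop stays unchanged.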