Media (0)

No media matching your criteria is available on the site.

Other articles (80)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here only contains the MediaSPIP sources in the standalone version.
    To get a working installation, you must manually install all the software dependencies on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • User profiles

    12 April 2011, by

    Each user has a profile page allowing them to edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (7422)

  • Decoding RIMM streaming file format

    10 September 2011, by Thomas

    I want to decode the video (visual) frames within a BlackBerry RIMM file. So far I have a parser, and some corresponding container documentation from RIM.

    The video codec is H264 and is explicitly set on the device using one of the video.encodings properties. However, FFmpeg is not able to decode the frames, and this is driving me nuts.

    Edit 1: The issue seems to be the lack of SPS and PPS in the frames, and artificially inserting them has proven unsuccessful so far (all-grey image). The BlackBerry 9700 sends

    0x00 0x00 0x?? 0x?? 0xType

    where Type follows table 7-1 in the H264 spec (I and P frames). We believe the 0x?? 0x?? bytes represent the size of the frame; however, the size does not always correspond to the size found by the parser (the parser seems to be working correctly).

    I have a Windows decoder codec from BlackBerry, called mc_demux_mp2_ds.ax, and can play some MPEG-4 files captured the same way, but it is a Windows-only binary, and the H264 files will not play either way. I am aware of previous attempts. The capture URL for javax.microedition.media.Manager is

    encoding=video-3gpp_width=176_height=144_video_codec=H264_audio_codec=AAC

    and I am writing to an output stream. Some example files here.

    Edit 2: It turns out that about 3-4 of the 12-15 available video capture modes flat out fail and refuse to output data, even in the simplest of test applications. So any working solution should implement MPEG-4, H264 and H263 with both AMR and AAC, so that there are fallback alternatives when one sound codec and/or resolution fails. Reboots, hangs and whatnot litter the BlackBerry video implementation and vary from firmware to firmware; total suckage.
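
    For illustration only (not part of the original post): a minimal Python sketch of the kind of conversion described above, turning the length-prefixed frames into an Annex B stream with SPS and PPS injected so FFmpeg can attempt to decode it. The 4-byte big-endian length prefix and the SPS/PPS byte strings are assumptions and placeholders, not values confirmed by the post.

    # Hypothetical sketch: rewrite length-prefixed frames as an Annex B stream.
    # Assumes each frame starts with a 4-byte big-endian size (the observed
    # "0x00 0x00 0x?? 0x??" prefix) followed by the NAL unit itself, and that
    # valid SPS/PPS NAL units for the 176x144 capture are available.
    import struct
    import sys

    START_CODE = b"\x00\x00\x00\x01"

    # Placeholders only: real SPS/PPS must come from the device or be generated
    # for the exact encoder settings; these bytes are NOT valid parameter sets.
    SPS = bytes.fromhex("00")
    PPS = bytes.fromhex("00")

    def rimm_frames_to_annexb(src_path, dst_path):
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            # Write the parameter sets first so a decoder can configure itself.
            dst.write(START_CODE + SPS)
            dst.write(START_CODE + PPS)
            while True:
                header = src.read(4)
                if len(header) < 4:
                    break
                (size,) = struct.unpack(">I", header)  # assumed frame size
                nal = src.read(size)
                if len(nal) < size:
                    break  # truncated capture
                dst.write(START_CODE + nal)

    if __name__ == "__main__":
        rimm_frames_to_annexb(sys.argv[1], sys.argv[2])

    The resulting file could then be probed with ffmpeg's raw H264 demuxer (ffmpeg -f h264 -i out.264 ...); whether decoding succeeds still depends on having the correct parameter sets for the capture.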

  • ffmpeg: add a data size threshold for muxing queue size

    15 October 2020, by Jan Ekström
    ffmpeg: add a data size threshold for muxing queue size
    

    This way, the old max-queue-size-based behavior is kept for streams
    where each individual packet is large, while for smaller streams
    more packets can be buffered (the current default is 50 megabytes
    per stream).

    By way of explanation: by default, ffmpeg copies packets from before
    the appointed seek point/start time and puts them into the local
    muxing queue. Previously, this queue was much less likely to be
    utilized, since the encoder (and thus the output stream) was
    initialized as soon as the filter chain was initialized.

    Now, since we will be pushing the encoder initialization to when the
    first AVFrame is decoded and filtered - which only happens after
    the exact seek point is hit as packets are ignored until then -
    this queue will be seeing much more usage.

    In more layman's terms, this attempts to fix cases such as where:
    - seek point ends up being 5 seconds before requested time.
    - audio is set to copy, and thus immediately begins filling the
    muxing queue.
    - video is being encoded, and thus all received packets are skipped
    until the requested time is hit.

    • [DH] doc/ffmpeg.texi
    • [DH] fftools/ffmpeg.c
    • [DH] fftools/ffmpeg.h
    • [DH] fftools/ffmpeg_opt.c
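
    As a hedged illustration of the scenario the commit message describes (a seek that lands before the requested time, audio stream-copied, video re-encoded): current ffmpeg documentation exposes a per-stream threshold named -muxing_queue_data_threshold, but treat that option name and the values below as assumptions rather than part of the commit text.

    # Sketch of the commit's scenario: audio is copied (its packets start filling
    # the muxing queue immediately), video is re-encoded (its packets are skipped
    # until the requested time), so the queue sees more use with lazy encoder init.
    # The -muxing_queue_data_threshold option name and value are assumptions.
    import subprocess

    subprocess.run(
        [
            "ffmpeg",
            "-ss", "00:10:00",      # requested start; the real seek point may land earlier
            "-i", "input.mkv",      # hypothetical input file
            "-c:a", "copy",         # audio stream-copied straight into the muxing queue
            "-c:v", "libx264",      # video re-encoded, encoder initialized lazily
            "-muxing_queue_data_threshold:a:0", str(64 * 1024 * 1024),
            "output.mkv",
        ],
        check=True,
    )
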
  • FFMPEG H264 with custom overlay per frame

    4 October 2020, by La bla bla

    We have a stream that is stored in the cloud (Amazon S3) as individual H264 frames. The frames are stored as framexxxxxx.264; the numbering doesn't start from 0 but rather from some larger number, say 1000 (so, frame001000.264).

    


    The goal is to create an MP4 clip which is either a timelapse or just played back much faster for inspection and other checking (compressing around 3 hours of video down to under 20 minutes). This also requires overlaying the frame number (the filename) on the frame itself.

    At first I was creating a timelapse by pulling from S3 only the keyframes (I-frames? still rather new to codecs & stuff), overlaying the filename on them, and saving them as PNG (which probably isn't needed, but that's what I did) using the following (this command is used inside a Python script):

    ffmpeg -y -i {h264_name} -vf \"scale=1920:-1, drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:fontsize=34:text={txt}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2\" -c:a copy -pix_fmt yuv420p {basename}.png
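
    For illustration only (not from the original post), a minimal sketch of how that per-frame command might be assembled and run from Python, overlaying the source filename; overlay_filename, its arguments, and the paths are hypothetical placeholders.

    # Hypothetical wrapper around the per-frame drawtext command shown above.
    # The overlaid text is the source filename (e.g. "frame001000.264").
    import os
    import subprocess

    def overlay_filename(h264_path, out_png):
        label = os.path.basename(h264_path)
        vf = ("scale=1920:-1,"
              "drawtext=fontfile=/usr/share/fonts/truetype/ubuntu-font-family/Ubuntu-B.ttf:"
              f"fontsize=34:text={label}:fontcolor=white:x=50:y=50:bordercolor=black:borderw=2")
        subprocess.run(["ffmpeg", "-y", "-i", h264_path, "-vf", vf,
                        "-pix_fmt", "yuv420p", out_png], check=True)
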

    After this I combined all the frames by using Python to rename the lowest-numbered frame to 0.png and incrementing from there (so the numbering would be continuous; because I only used keyframes, the numbers originally weren't sequential), and then running

    ffmpeg -y -f image2 -i %d.png -r {self.params.fps} -vcodec libx264 -crf {self.params.crf} -pix_fmt yuv420p {out_file}
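
    Again for illustration only (not from the original post), a minimal Python sketch of the renumber-and-encode step described above; directory names, fps and crf values are placeholders.

    # Illustrative sketch: rename the keyframe PNGs to a continuous 0..N sequence,
    # then run the image2-based encode shown above. Paths and parameters are
    # placeholder values, not taken from the original script.
    import os
    import subprocess

    def renumber_and_encode(png_dir, out_file, fps=25, crf=23):
        pngs = sorted(f for f in os.listdir(png_dir) if f.endswith(".png"))
        for i, name in enumerate(pngs):
            os.rename(os.path.join(png_dir, name), os.path.join(png_dir, f"{i}.png"))
        subprocess.run(
            ["ffmpeg", "-y", "-f", "image2", "-i", os.path.join(png_dir, "%d.png"),
             "-r", str(fps), "-vcodec", "libx264", "-crf", str(crf),
             "-pix_fmt", "yuv420p", out_file],
            check=True,
        )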

    and this worked great, but the gap between keyframes was too long to allow for proper inspection.

    So now for the question(s):

    Since I know frames that are not keyframes (P-frames?) can't be decoded alone by ffmpeg, the method of overlaying the filename and converting to PNG (or keeping it as H264, same thing) won't work, or at least I couldn't find a way to make it work. Maybe there's a way to specify a frame's keyframe? How can one overlay the filename (and not the frame number, as shown here for example)?

    Also, is it possible to skip some P-frames between the keyframes? (So if there is a keyframe every 30 frames, we would take a keyframe, a frame 15 frames later, and then another keyframe.)

    I thought about using ffmpeg's pipe option to feed it the files as they're being downloaded, but I'm not sure whether I can specify drawtext that way.

    Also, is there another alternative that can achieve this? (At first I was converting to PNG, using Python and OpenCV to add the filename, and then merging the PNGs to MP4, but then I found that drawtext can do that in a single command, so I used it.)