
Other articles (71)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each remains independently configurable so that the minimum status required to use it can be changed, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

  • Media quality after processing

    21 June 2013, by

    Correctly configuring the software that processes media is important for striking a balance between the interests involved (the host's bandwidth, media quality for the author and the visitor, accessibility for the visitor). How should you tune the quality of your media?
    The higher the media quality, the more bandwidth is used, and a visitor on a low-speed internet connection will have to wait longer. Conversely, the lower the quality, the more degraded the media becomes (...)

On other sites (11609)

  • avformat_open_input fails only with a custom IO context

    19 January 2017, by Tim

    I am running into an odd issue with avformat_open_input; it fails with:

    Invalid data found when processing input

    But this only happens when I attempt to read the file using a custom AVIOContext.

    My custom code is as follows (error checking omitted for clarity):

    auto fmtCtx = avformat_alloc_context();
    auto ioBufferSize = 32768;
    auto ioBuffer = (unsigned char *)av_malloc(ioBufferSize);
    auto ioCtx = avio_alloc_context(ioBuffer,
                                    ioBufferSize,
                                    0,
                                    reinterpret_cast<void *>(this),
                                    &imageIORead,
                                    NULL,
                                    &imageIOSeek);

    fmtCtx->pb = ioCtx;
    fmtCtx->flags |= AVFMT_FLAG_CUSTOM_IO;

    int err = avformat_open_input(&fmtCtx, NULL, NULL, NULL);

    imageIOSeek is never called, but it properly handles the whence parameter, including the AVSEEK_SIZE option. My file data is already loaded in memory, so imageIORead is trivial (returning 0 at EOF):

    int imageIORead(void *opaque, uint8_t *buf, int buf_size) {
        Image *d = (Image *)buf;
        int rc = std::min(buf_size, static_cast<int>(d->data.size() - d->pos));

        memcpy(buf, d->data.data() + d->pos, rc);
        d->pos += rc;
        return rc;
    }

    The data being read is loaded from a file on disk:

    /tmp/25.jpeg

    The following code is able to open and extract the image correctly:

    auto fmtCtx = avformat_alloc_context();
    int err = avformat_open_input(&fmtCtx, "/tmp/25.jpeg", NULL, NULL);

    The project is using a minified build of libavformat that includes only the formats we need. I don't believe this is the cause of the problem, since the file can be opened and handled properly when the path is specified directly. I haven't seen any configure options specifically targeting support for custom IO contexts.

    This is the image in question: 25.jpeg
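Note that the posted imageIORead casts buf — the destination buffer FFmpeg asks the callback to fill — to Image *, while this is what was registered as the opaque pointer in avio_alloc_context. For comparison, here is a minimal, self-contained sketch of a memory-backed read callback that takes its source state from opaque instead. MemoryBuffer and memoryRead are hypothetical names with no FFmpeg dependency; only the callback shape matches avio_alloc_context's read_packet signature:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for the asker's Image type: a buffer plus a cursor.
struct MemoryBuffer {
    std::vector<uint8_t> data;
    size_t pos = 0;
};

// Read callback with the same shape as avio_alloc_context's read_packet.
// The source state comes from opaque (the pointer registered with
// avio_alloc_context); buf is only the destination to copy into.
int memoryRead(void *opaque, uint8_t *buf, int buf_size) {
    MemoryBuffer *m = static_cast<MemoryBuffer *>(opaque);
    int rc = static_cast<int>(
        std::min<size_t>(buf_size, m->data.size() - m->pos));
    if (rc <= 0)
        return 0; // EOF (FFmpeg 4.x and later expect AVERROR_EOF here instead)
    std::memcpy(buf, m->data.data() + m->pos, rc);
    m->pos += rc;
    return rc;
}
```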

  • How to pipe live video frames from ffmpeg to PIL?

    30 January 2017, by Ryan Martin

    I need to use ffmpeg/avconv to pipe jpg frames to a python PIL (Pillow) Image object, using gst as an intermediary*. I've been searching everywhere for this answer without much luck. I think I'm close - but I'm stuck. I'm using Python 2.7.

    My ideal pipeline, launched from python, looks like this:

    1. ffmpeg/avconv (as h264 video)
    2. Piped ->
    3. gst-streamer (frames split into jpg)
    4. Piped ->
    5. Pil Image Object

    I have the first few steps under control as a single command that writes .jpgs to disk as furiously fast as the hardware will allow.

    That command looks something like this:

    command = [
           "ffmpeg",
           "-f video4linux2",
           "-r 30",
           "-video_size 1280x720",
           "-pixel_format 'uyvy422'",
           "-i /dev/video0",
           "-vf fps=30",
           "-f H264",
           "-vcodec libx264",
           "-preset ultrafast",
           "pipe:1 -",
           "|", # Pipe to GST
           "gst-launch-1.0 fdsrc !",
           "video/x-h264,framerate=30/1,stream-format=byte-stream !",
           "decodebin ! videorate ! video/x-raw,framerate=30/1 !",
           "videoconvert !",
           "jpegenc quality=55 !",
           "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg"
         ]

    This will successfully write frames to disk if run with Popen or os.system.

    But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read the frames, as they are written, in a file-like buffer that can then be read by PIL.

    Something like this:

       import subprocess as sp
       import shlex
       import StringIO

       clean_cmd = shlex.split(" ".join(command))
       pipe = sp.Popen(clean_cmd, stdout = sp.PIPE, bufsize=10**8)

       while pipe:

           raw = pipe.stdout.read()
           buff = StringIO.StringIO()
           buff.write(raw)
           buff.seek(0)

           # Open or do something clever...
           im = Image.open(buff)
           im.show()

           pipe.flush()

    This code doesn’t work - I’m not even sure I can use "while pipe" in this way. I’m fairly new to using buffers and piping in this way.

    I’m not sure how I would know that an image has been written to the pipe or when to read the ’next’ image.

    Any help would be greatly appreciated in understanding how to read the images from a pipe rather than to disk.

    • This is ultimately a Raspberry Pi 3 pipeline, and in order to increase my frame rates I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method - as opposed to running H264 video directly from the camera chip.

  • C++ Extracting a h264 Subsequence from Byte Stream

    10 January 2017, by Simon

    I have a raw h.264 byte stream coming from an RTSP network camera. In order to get the byte stream, I catch the piped output from ffmpeg using popen():

    ffmpeg -i rtsp://address -c:v copy -an -f h264 pipe:1

    At some point in time, I would like to start recording from the stream for a while (and save everything to an mp4 file). I want to achieve this without decoding the stream to an intermediate format (e.g., yuv420p) and encoding it back. As a first test, I just started writing the output buffer to disk after a couple of seconds. Then, I can encode the video again using

    ffmpeg -i cam.h264 -c:v h264 -an -f mp4 cam_out.mp4

    Here, ffmpeg complains that the first part of the data is corrupted (it still seems able to recover, as it just throws away the corrupted parts). This of course makes sense, as I simply start recording without looking for the start of a frame. Ideally, I would like to start and stop recording at the correct points in the stream. I have had a brief look at the h.264 format and NAL units. Is there some simple way of detecting "good" positions in the stream to start recording?