

Other articles (67)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    As with the previous version, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

  • Mise à disposition des fichiers

    14 April 2011

    By default, when it is first set up, MediaSPIP does not let visitors download files, whether they are the originals or the result of their transformation or encoding. It only lets them view the files.
    However, it is possible, and easy, to give visitors access to these documents in various forms.
    All of this is handled on the skeleton's configuration page. Go to the channel's administration area and choose in the navigation (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)

On other websites (7958)

  • multithreaded client/server listener using ffmpeg to record video

    29 January 2014, by user1895639

    I've got a python project where I need to trigger start/stop of two Axis IP cameras using ffmpeg. I've gotten bits and pieces of this to work but can't put the whole thing together. A "listener" program runs on one machine that can accept messages from other machines to start and stop recordings.

    The listener responds to two commands only:

    START v:/video_dir/myvideo.mov
    STOP

    The START command is followed by the full path of a video file that it will record.
    When receiving a STOP command, the video recording should stop.

    I am using ffmpeg to attach to cameras, and manually doing this works:

    ffmpeg.exe -i rtsp://cameraip/blah/blah -vcodec copy -acodec copy -y c:\temp\output.mov

    I can attach to the stream and upon hitting 'q' I can stop the recording.

    What I'd like to be able to do is relatively simple, I just can't wrap my head around it:

    Listener listens
    When it receives a START signal, it spawns two processes to start recording from each camera
    When it receives a STOP signal, it sends the 'q' keystroke to each process to tell ffmpeg to stop recording.

    I've got the listener part, but I'm just not sure how to get the multithreaded part down:

    while True:
        client, address = s.accept()
        data = client.recv(size)
        if data:
            if data.startswith('START'):
                pass  # start threads here
            elif data.startswith('STOP'):
                pass  # how to send a stop to the newly-created processes?

    In the thread code I'm doing this (which may be very incorrect):

    subprocess.call('ffmpeg.exe -i "rtsp://cameraipstuff" -vcodec copy -acodec copy -t 3600 -y ' + filename)

    I can get this process to spawn off and I see it recording, but how can I send it a "q" message? I can use a Queue to pass a stop message and then do something like

    win32com.client.Dispatch('WScript.Shell').SendKeys('q')

    but that seems awkward. Perhaps a pipe and sending q to stdin? Regardless, I'm pretty sure using threads is the right approach (as opposed to calling subprocess.call('ffmpeg.exe ...') twice in a row), but I just don't know how to tie things together.
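    One way to avoid SendKeys entirely is the pipe idea the question already hints at: keep a `subprocess.Popen` handle per camera and write `q` to its stdin, since ffmpeg reads that keypress from standard input. A minimal sketch of that approach (the camera URL and output path are placeholders, and `ffmpeg` is assumed to be on the PATH):

    ```python
    import subprocess

    def build_record_cmd(camera_url, filename):
        # -vcodec/-acodec copy avoid re-encoding; -y overwrites the output file
        return ["ffmpeg", "-i", camera_url,
                "-vcodec", "copy", "-acodec", "copy", "-y", filename]

    def start_recording(camera_url, filename):
        # stdin=PIPE is what lets us later send the 'q' keypress to ffmpeg
        return subprocess.Popen(build_record_cmd(camera_url, filename),
                                stdin=subprocess.PIPE)

    def stop_recording(proc, timeout=10):
        # writing 'q' asks ffmpeg to finalize the output file and exit cleanly
        proc.stdin.write(b"q")
        proc.stdin.flush()
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.terminate()  # fallback if ffmpeg does not exit in time
    ```

    The listener loop would then call `start_recording` once per camera on START, store the two `Popen` handles, and call `stop_recording` on each of them on STOP, with no threads needed just to keep ffmpeg running.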

  • doc/example: Add http multi-client example code

    25 July 2015, by Stephan Holljes
    doc/example: Add http multi-client example code
    

    Signed-off-by: Stephan Holljes <klaxa1337@googlemail.com>

    • [DH] doc/examples/Makefile
    • [DH] doc/examples/http_multiclient.c
  • How to use FFMPEG API to decode to client allocated memory

    25 March 2020, by VorpalSword

    I’m trying to use the FFMPEG API to decode into a buffer defined by the client program by following the tips in this question but using the new pattern for decoding instead of the now deprecated avcodec_decode_video2 function.

    If my input file is an I-frame only format, everything works great. I’ve tested with a .mov file encoded with v210 (uncompressed).

    However, if the input is a long-GoP format (I’m trying with H.264 high profile 4:2:2 in an mp4 file) I get the following pleasingly psychedelic/impressionistic result:

    [image: Crowd run. On acid!]

    There’s clearly something motion-vectory going on here!

    And if I let FFMPEG manage its own buffers with the H.264 input by not overriding AVCodecContext::get_buffer2, I can make a copy from the resulting frame to my desired destination buffer and get good results.

    Here’s my decoder method, _frame and _codecCtx are object members of type AVFrame* and AVCodecContext* respectively. They get alloc’d and init’d in the constructor.

    virtual const DecodeResult decode(const rv::sz_t toggle) override {
        _toggle = toggle & 1;
        using Flags_e = DecodeResultFlags_e;
        DecodeResult ans(Flags_e::kNoResult);

        AVPacket pkt;   // holds compressed data
        ::av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        int ret;

        // read the compressed frame to decode
        _err = av_read_frame(_fmtCtx, &pkt);
        if (_err < 0) {
            if (_err == AVERROR_EOF) {
                ans.set(Flags_e::kEndOfFile);
                _err = 0; // we can safely ignore EOF errors
                return ans;
            } else {
                baleOnFail(__PRETTY_FUNCTION__);
            }
        }

        // send (compressed) packets to the decoder until it produces an uncompressed frame
        do {

            // sender
            _err = ::avcodec_send_packet(_codecCtx, &pkt);
            if (_err < 0) {
                if (_err == AVERROR_EOF) {
                    _err = 0; // EOFs are ok
                    ans.set(Flags_e::kEndOfFile);
                    break;
                } else {
                    baleOnFail(__PRETTY_FUNCTION__);
                }
            }

            // receiver
            ret = ::avcodec_receive_frame(_codecCtx, _frame);
            if (ret == AVERROR(EAGAIN)) {
                continue;
            } else if (ret == AVERROR_EOF) {
                ans.set(Flags_e::kEndOfFile);
                break;
            } else if (ret < 0) {
                _err = ret;
                baleOnFail(__PRETTY_FUNCTION__);
            } else {
                ans.set(Flags_e::kGotFrame);
            }

            av_packet_unref(&pkt);

        } while (!ans.test(Flags_e::kGotFrame));

        //packFrame(); <-- used to copy to client image

        return ans;
    }

    And here’s my override for get_buffer2

    int getVideoBuffer(struct AVCodecContext* ctx, AVFrame* frm) {
        // ensure frame pointers are all null
        if (frm->data[0] || frm->data[1] || frm->data[2] || frm->data[3]) {
            ::strncpy(_errMsg, "non-null frame data pointer detected.", AV_ERROR_MAX_STRING_SIZE);
            return -1;
        }

        // get format descriptor, ensure it's valid.
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(static_cast<AVPixelFormat>(frm->format));
        if (!desc) {
            ::strncpy(_errMsg, "Pixel format descriptor not available.", AV_ERROR_MAX_STRING_SIZE);
            return AVERROR(EINVAL);
        }

        // for Video, extended data must point to the same place as data.
        frm->extended_data = frm->data;

        // set the data pointers to point at the Image data.
        int chan = 0;
        IMG* img = _imgs[_toggle];
        // initialize active channels
        for (; chan < 3; ++chan) {
            frm->buf[chan] = av_buffer_create(
                static_cast<uint8_t*>(img->begin(chan)),
                rv::unsigned_cast<int>(img->size(chan)),
                Player::freeBufferCallback, // callback does nothing
                reinterpret_cast<void*>(this),
                0 // i.e. AV_BUFFER_FLAG_READONLY is not set
            );
            frm->linesize[chan] = rv::unsigned_cast<int>(img->stride(chan));
            frm->data[chan] = frm->buf[chan]->data;
        }
        // zero out inactive channels
        for (; chan < AV_NUM_DATA_POINTERS; ++chan) {
            frm->data[chan] = NULL;
            frm->linesize[chan] = 0;
        }
        return 0;
    }

    I can reason that the codec needs to keep reference frames in memory and so I’m not really surprised that this isn’t working, but I’ve not been able to figure out how to have it deliver clean decoded frames to client memory. I thought that AVFrame::key_frame would have been a clue, but, after observing its behaviour in gdb, it doesn’t provide a useful trigger for when to allocate AVFrame::bufs from the buffer pool and when they can be initialized to point at client memory.
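    For what it's worth, the copy-after-decode route described above (letting libavcodec own its buffer pool and copying each finished frame into client memory) can be sketched with `av_image_copy`; the `dst_data`/`dst_linesize` arrays describing the client image are assumptions about the caller's layout:

    ```c
    #include <libavcodec/avcodec.h>
    #include <libavutil/imgutils.h>

    /* Copy a decoded AVFrame into caller-owned planes instead of
     * overriding get_buffer2, so the codec's reference frames stay
     * untouched in its own pool. */
    static void copy_frame_to_client(const AVFrame *frame,
                                     uint8_t *dst_data[4],
                                     int dst_linesize[4])
    {
        av_image_copy(dst_data, dst_linesize,
                      (const uint8_t **)frame->data, frame->linesize,
                      (enum AVPixelFormat)frame->format,
                      frame->width, frame->height);
    }
    ```

    The cost is one extra memcpy per frame, but it sidesteps the problem entirely: long-GoP decoders are free to keep writing motion-compensated reference data into their own buffers, and only completed output frames ever touch the client image.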

    Grateful for any help!