
Media (91)

Other articles (103)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

        Distribution name   Version name           Version number
        Debian              Squeeze                6.x.x
        Debian              Wheezy                 7.x.x
        Debian              Jessie                 8.x.x
        Ubuntu              The Precise Pangolin   12.04 LTS
        Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • MediaSPIP Core: Configuration

    9 November 2010

    By default, MediaSPIP Core provides three different configuration pages (these pages rely on the CFG configuration plugin to work): a page dedicated to the general configuration of the template (squelette); a page dedicated to the configuration of the site's home page; a page dedicated to the configuration of the sections (secteurs).
    It also provides an additional page, which only appears when certain plugins are enabled, used to control their display and specific features (...)

  • User profiles

    12 April 2011

    Each user has a profile page that lets them edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised; it is visible only when the visitor is logged in to the site.
    The user can also edit their profile from their author page; a "Modifier votre profil" ("Edit your profile") link in the navigation is (...)

On other sites (9727)

  • How to read data from an ffmpeg subprocess pipe without blocking when the RTSP stream is disconnected

    22 August 2024, by Jester48

    I have some problems with the ffmpeg subprocess in Python with which I open an RTSP stream.
    One of them is the long time it takes to read a frame from the pipe: reading a single frame takes about 250 ms, and most of that time is spent in the select.select() call. This makes it problematic to open streams above 4 FPS. When I do not use select.select(), the reading speed is normal, but when the connection to the RTSP stream is lost, the program gets stuck in self.pipe.stdout.read() and never returns from it. Is it possible to protect against missing data in pipe.stdout.read() without losing the frame-reading speed, as happens with select.select()?

    


    import select
    import subprocess
    import threading

    import numpy as np
    from decouple import config  # assumed source of config(); adjust to your settings loader


    class RTSPReceiver(threading.Thread):
        def __init__(self):
            threading.Thread.__init__(self)
            # ffmpeg decodes the RTSP stream and writes raw bgr24 frames to stdout
            self.ffmpeg_cmd = [
                'ffmpeg', '-loglevel', 'quiet', '-rtsp_transport', 'tcp', '-nostdin',
                '-i', f'rtsp://{config("LOGIN")}:{config("PASS")}@{config("HOST")}/stream=0',
                '-fflags', 'nobuffer', '-flags', 'low_delay', '-map', '0:0',
                '-r', f'{config("RTSP_FPS")}', '-f', 'rawvideo', '-pix_fmt', 'bgr24', '-',
            ]
            self.img_w = 2688
            self.img_h = 1520
            self.image = None
            self.lock = threading.Lock()
            self.pipe = None

        def connect(self) -> None:
            # Start (or restart) the ffmpeg process.
            self.pipe = subprocess.Popen(self.ffmpeg_cmd, stdout=subprocess.PIPE)

        def reconnect(self) -> None:
            # Kill a stalled ffmpeg process, then spawn a fresh one.
            if self.pipe:
                self.pipe.kill()
                self.pipe.wait()
            self.connect()

        def run(self) -> None:
            self.connect()
            while True:
                try:
                    # Wait up to 15 s for data so a dead RTSP source cannot block read() forever.
                    ready, _, _ = select.select([self.pipe.stdout], [], [], 15.0)
                    if ready:
                        raw_image = self.pipe.stdout.read(self.img_w * self.img_h * 3)
                        if raw_image:
                            with self.lock:
                                self.image = np.frombuffer(raw_image, dtype=np.uint8).reshape(self.img_h, self.img_w, 3)
                    else:
                        self.reconnect()
                except Exception:
                    self.reconnect()
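
    One way to avoid both the per-frame select() cost and the risk of blocking forever is to move the blocking read into a dedicated reader thread and let the consumer wait on a queue with a timeout; if nothing arrives in time, killing the ffmpeg process unblocks the pending read(). The code below is only a minimal sketch of that idea, not the original poster's code: FRAME_W, FRAME_H, start_ffmpeg and consume are illustrative names, and the 15-second timeout is just an example value.

    import queue
    import subprocess
    import threading

    import numpy as np

    FRAME_W, FRAME_H = 2688, 1520
    FRAME_BYTES = FRAME_W * FRAME_H * 3


    def start_ffmpeg(cmd):
        # Same kind of ffmpeg command as above: raw bgr24 frames on stdout.
        return subprocess.Popen(cmd, stdout=subprocess.PIPE)


    def reader(pipe, frames):
        # Blocking reads are fine here: this thread is the only one that blocks.
        while True:
            raw = pipe.stdout.read(FRAME_BYTES)
            if len(raw) < FRAME_BYTES:  # EOF / short read: ffmpeg exited or was killed
                break
            frames.put(raw)


    def consume(cmd):
        while True:
            pipe = start_ffmpeg(cmd)
            frames = queue.Queue(maxsize=2)
            t = threading.Thread(target=reader, args=(pipe, frames), daemon=True)
            t.start()
            try:
                while True:
                    # No select() in the hot path: just wait on the queue.
                    raw = frames.get(timeout=15.0)
                    image = np.frombuffer(raw, dtype=np.uint8).reshape(FRAME_H, FRAME_W, 3)
                    # ... hand `image` to the rest of the application ...
            except queue.Empty:
                # No frame for 15 s: assume the RTSP source dropped and restart ffmpeg.
                pipe.kill()
                pipe.wait()
                t.join()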


    


  • How to decode wmv3 video using libavcodec? (ffmpeg)

    13 March 2012, by Renan Elias

    I'm developing an application that reads a live TV stream from the internet, and I need to play it in my iPad application.

    I've compiled ffmpeg into my iPad application so that I can use the libavcodec library, but I haven't been able to get it working.

    I know how to read the stream packets and tell whether a packet is audio or video, but I don't know how to use the library to convert the original codecs to H.264 and MP3.

    I need to convert a wmv3 stream packet to H.264 and save it to a file.

    My code is below...

    AVFormatContext* pFormatCtx = avformat_alloc_context();
    int             i, videoStream, audioStream;
    AVCodecContext  *pCodecCtx;
    AVCodecContext *aCodecCtx;
    AVCodec         *pCodec;
    AVCodec         *aCodec;
    AVFrame         *pFrame;
    AVFrame         *pFrameRGB;
    AVPacket        packet;
    int             frameFinished;
    int             numBytes;
    uint8_t         *buffer;

    static struct SwsContext *img_convert_ctx;


    // Register all formats and codecs
    av_register_all();
    avcodec_register_all();
    avformat_network_init();

    // Open video file
    if(avformat_open_input(&pFormatCtx, [objURL cStringUsingEncoding:NSASCIIStringEncoding] ,NULL,NULL) != 0){
       return -1;
    }

    // Retrieve stream information
    if(avformat_find_stream_info(pFormatCtx, NULL)<0)
       return -1; // Couldn't find stream information

    // Dump information about file onto standard error
    av_dump_format(pFormatCtx, 0, [objURL cStringUsingEncoding:NSASCIIStringEncoding], 0);

    // Find the first video stream
    videoStream=-1;
    audioStream=-1;
    for(i=0; i < pFormatCtx->nb_streams; i++) {
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO && videoStream < 0) {
           videoStream=i;
       }
       if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO && audioStream < 0) {
           audioStream=i;
       }
    }
    if(videoStream==-1)
       return -1; // Didn't find a video stream
    if(audioStream==-1)
       return -1;

    // Get a pointer to the codec context for the video stream
    pCodecCtx=pFormatCtx->streams[videoStream]->codec;

    // Find the decoder for the video stream
    pCodec=avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec==NULL) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
    }

    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, NULL)<0)
       return -1; // Could not open codec

    // Get a pointer to the codec context for the audio stream
    aCodecCtx=pFormatCtx->streams[audioStream]->codec;

    // Find the decoder for the audio stream
    aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if(!aCodec) {
       fprintf(stderr, "Unsupported codec!\n");
       return -1; // Codec not found
    }

    // Open codec
    if(avcodec_open2(aCodecCtx, aCodec, NULL)<0)
       return -1; // Could not open codec

    // Allocate video frame
    pFrame=avcodec_alloc_frame();

    // Allocate an AVFrame structure
    pFrameRGB=avcodec_alloc_frame();
    if(pFrameRGB==NULL)
       return -1;

    // Determine required buffer size and allocate buffer
    numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                               pCodecCtx->height);
    buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    // Assign appropriate parts of buffer to image planes in pFrameRGB
    // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
    // of AVPicture
    avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
                  pCodecCtx->width, pCodecCtx->height);

    // Read frames from the input stream
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0) {
       // Is this a packet from the video stream?
       if(packet.stream_index==audioStream) {
           NSLog(@"Audio.. i'll solve the video first...");
       } else if(packet.stream_index==videoStream) {

           /// HOW CONVERT THIS VIDEO PACKET TO H264 and save on a file? :(
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
    }

    // Free the RGB image
    av_free(buffer);
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec
    avcodec_close(pCodecCtx);

    // Close the video file
    avformat_close_input(&pFormatCtx);

    return 0;
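
    The missing piece in the loop above is a decode -> re-encode -> write step: decode each wmv3 packet into a raw frame, feed the frame to an H.264 encoder, and mux the encoded packets into an output file. As a rough illustration of that structure (not the poster's code, and using PyAV, the Python bindings over the same libav* libraries, rather than the C API), a minimal transcode sketch could look like this; the function name, the output frame rate of 25 and the output file name are assumptions.

    import av

    def transcode_to_h264(in_url, out_path="output.mp4"):
        inp = av.open(in_url)
        out = av.open(out_path, mode="w")

        in_stream = inp.streams.video[0]
        # H.264 output stream matching the input dimensions (frame rate assumed).
        out_stream = out.add_stream("libx264", rate=25)
        out_stream.width = in_stream.codec_context.width
        out_stream.height = in_stream.codec_context.height
        out_stream.pix_fmt = "yuv420p"

        for frame in inp.decode(in_stream):              # decode wmv3 (or any input codec)
            frame = frame.reformat(format="yuv420p")     # match the encoder's pixel format
            for packet in out_stream.encode(frame):      # re-encode as H.264
                out.mux(packet)

        for packet in out_stream.encode():               # flush the encoder
            out.mux(packet)

        out.close()
        inp.close()

    In the C API the same loop corresponds to avcodec_decode_video2 (or avcodec_send_packet/avcodec_receive_frame on newer FFmpeg) for decoding, an encoder context obtained from avcodec_find_encoder(AV_CODEC_ID_H264) for encoding, and av_interleaved_write_frame for writing the packets to the output file.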

  • ffmpeg live stream latency

    22 August 2014, by Alex Fu

    I'm currently working on live streaming video from device A (source) to device B (destination) directly over a local WiFi network.

    I've built FFMPEG to work on the Android platform and have been able to stream video from A -> B successfully, at the expense of latency (it takes about 20 seconds for a movement or change to appear on screen, as if the video were 20 seconds behind actual events).

    Initial start-up is around 4 seconds. I've been able to trim that initial start-up time by lowering probesize and max_analyze_duration, but the 20-second delay is still there.

    I've sprinkled some timing events around the code to try and figure out where the most time is being spent...

    • naInit: 0.24575 sec
    • naSetup: 0.043705 sec

    The first video frame isn't obtained until 0.035342 sec after the decodeAndRender function is called. Subsequent decoding times are plotted here: http://jsfiddle.net/uff0jdf7/1/ (interactive graph)

    From all the timing data I've recorded, nothing really jumps out at me unless I'm doing the timing wrong. Some have suggested that I am buffering too much data; however, as far as I can tell, I'm only buffering one image at a time. Is this too much?

    Also, the source video that's coming in is in the P264 format; it is apparently a custom implementation of H.264.

    jint naSetup(JNIEnv *pEnv, jobject pObj, int pWidth, int pHeight) {
     width = pWidth;
     height = pHeight;

     //create a bitmap as the buffer for frameRGBA
     bitmap = createBitmap(pEnv, pWidth, pHeight);
     if (AndroidBitmap_lockPixels(pEnv, bitmap, &pixel_buffer) < 0) {
       LOGE("Could not lock bitmap pixels");
       return -1;
     }

     //get the scaling context
     sws_ctx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
         pWidth, pHeight, AV_PIX_FMT_RGBA, SWS_BILINEAR, NULL, NULL, NULL);

     // Assign appropriate parts of bitmap to image planes in pFrameRGBA
     // Note that pFrameRGBA is an AVFrame, but AVFrame is a superset
     // of AVPicture
     av_image_fill_arrays(frameRGBA->data, frameRGBA->linesize, pixel_buffer, AV_PIX_FMT_RGBA, pWidth, pHeight, 1);
     return 0;
    }

    void decodeAndRender(JNIEnv *pEnv) {
     ANativeWindow_Buffer windowBuffer;
     AVPacket packet;
     AVPacket outputPacket;
     int frame_count = 0;
     int got_frame;

     while (!stop && av_read_frame(formatCtx, &packet) >= 0) {
       // Is this a packet from the video stream?
       if (packet.stream_index == video_stream_index) {

         // Decode video frame
         avcodec_decode_video2(codecCtx, decodedFrame, &got_frame, &packet);

         // Did we get a video frame?
         if (got_frame) {
           // Convert the image from its native format to RGBA
           sws_scale(sws_ctx, (uint8_t const * const *) decodedFrame->data,
               decodedFrame->linesize, 0, codecCtx->height, frameRGBA->data,
               frameRGBA->linesize);

           // lock the window buffer
           if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
             LOGE("Cannot lock window");
           } else {
             // draw the frame on buffer
             int h;
             for (h = 0; h < height; h++) {
               memcpy(windowBuffer.bits + h * windowBuffer.stride * 4,
                      pixel_buffer + h * frameRGBA->linesize[0],
                      width * 4);
             }
             // unlock the window buffer and post it to display
             ANativeWindow_unlockAndPost(window);

             // count number of frames
             ++frame_count;
           }
         }
       }

       // Free the packet that was allocated by av_read_frame
       av_free_packet(&packet);
     }

     LOGI("Total # of frames decoded and rendered %d", frame_count);
    }
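
     A large end-to-end delay like this usually comes from buffering in the demuxer and decoder rather than from the rendering loop, so one common mitigation is to open the input with low-latency options (a small probesize, a zero analyzeduration, and the nobuffer fflag). The sketch below is only an illustration of those options, shown with PyAV because the option names are the same AVDictionary keys that avformat_open_input would receive in the C/JNI code above; the URL and the values are placeholders.

     import av

     # Open the network stream with demuxer buffering kept to a minimum.
     container = av.open(
         "rtsp://example.local/stream",   # placeholder URL
         options={
             "probesize": "32768",        # stop probing after ~32 KiB
             "analyzeduration": "0",      # do not wait to analyse the stream
             "fflags": "nobuffer",        # drop the demuxer's input buffering
         },
     )

     for frame in container.decode(video=0):
         # Render or convert each frame as soon as it is produced.
         pass

     On the decoder side, the C code could additionally set codecCtx->flags |= AV_CODEC_FLAG_LOW_DELAY before avcodec_open2 to ask the decoder to minimise its internal delay.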