
Media (3)


Other articles (49)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is, once again, a problem that reveals the standardisation of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared towards the use of iTunes, whose SPEC is here; and the "Media RSS Module" format, which is more "free" and notably supported by Yahoo and the Miro software.
    File types supported in the feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Keeping control of your media in your hands

    13 avril 2011, par

    The vocabulary used on this site, and around MediaSPIP in general, aims to avoid references to Web 2.0 and the companies that profit from media sharing.
    While using MediaSPIP, you are invited to avoid using words like "Brand", "Cloud" and "Market".
    MediaSPIP is designed to facilitate the sharing of creative media online, while allowing authors to retain complete control of their work.
    MediaSPIP aims to be accessible to as many people as possible and development is based on expanding the (...)

On other sites (8313)

  • ffmpeg fails to connect a second camera when run using python thread

    1 June 2023, by LemonJumps

    I'm starting an ffmpeg dshow -> rawvideo capture over a pipe in a Python thread; see the code below.

    


        def run(self):
            # Launch ffmpeg: dshow capture piped out as raw BGR24 frames
            proc = subprocess.Popen(
                [FFMPEG_DIR + "ffmpeg", "-hide_banner", "-f", "dshow",
                    "-video_size", str(self.resolution[0]) + "x" + str(self.resolution[1]), # select resolution
                    "-framerate", str(self.framerate), # select framerate
                    "-i", "video=" + self.device, # select device
                    "-f", "rawvideo", "-pix_fmt", "bgr24", "pipe:"], # output to pipe
                stdout=subprocess.PIPE)

            frame_size = self.resolution[0] * self.resolution[1] * 3  # bytes per BGR24 frame
            buf = b''

            while True:
                outs = proc.stdout.read(frame_size)
                buf += outs

                if len(buf) >= frame_size:
                    with self.lock:
                        self.frame = buf[:frame_size]
                        self.cnt += 1

                    buf = b''

                if self.endEvent.is_set():
                    proc.kill()
                    return


    


    I am using the alternative name as the camera name; both cameras are the same model.
    Both devices' name and alternative name:

    


    ('ACR010 USB Webcam', '@device_pnp_\\\\?\\usb#vid_0c45&pid_636a&mi_00#8&83c432e&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\\global')

    


    ('ACR010 USB Webcam', '@device_pnp_\\\\?\\usb#vid_0c45&pid_636a&mi_00#8&34de11d9&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\\global')

    


    When I run this code for each camera individually, it works flawlessly.
    But when I run it for both cameras at the same time, ffmpeg fails with a "Could not run graph" error.

    


    The "Could not run graph" sounds like ffmpeg can't open multiple devices, or isn't allowed by windows, or maybe there's a token that gets reused because it's all within a single process ?

    


    What could really be causing this error, and how can I mitigate it?
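
    One way to narrow this down is to start both dshow captures as two independent ffmpeg processes, outside the threaded reader, and check whether they can run at the same time at all. A rough debugging sketch follows; FFMPEG_DIR and the two device strings are placeholders for the values used above:

        # Debugging sketch (FFMPEG_DIR and the alternative device names below are
        # placeholders for the values shown above). Each capture writes a few
        # seconds of raw video to its own file instead of a pipe, so any failure
        # points at dshow/USB rather than at the Python reading loop.
        import subprocess

        FFMPEG_DIR = ""  # assumption: ffmpeg is on PATH, or set the folder used above
        CAMERAS = [
            "@device_pnp_...camera-1-alternative-name...",  # placeholder
            "@device_pnp_...camera-2-alternative-name...",  # placeholder
        ]

        procs = []
        for i, dev in enumerate(CAMERAS):
            procs.append(subprocess.Popen([
                FFMPEG_DIR + "ffmpeg", "-y", "-hide_banner",
                "-f", "dshow",
                "-video_size", "640x480",
                "-framerate", "30",
                "-i", "video=" + dev,
                "-t", "5",  # capture only 5 seconds
                "-f", "rawvideo", "-pix_fmt", "bgr24",
                "cam%d.raw" % i,
            ]))

        for p in procs:
            p.wait()
            print(p.args[-1], "exit code:", p.returncode)

    If the two stand-alone processes fail in the same way, the Python threading wrapper can probably be ruled out.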

    


  • Ip camera using ffmpeg drawing on screen

    6 March 2014, by user3177342

    I'm using ffmpeg 1.2 to take video from an IP camera. I make it draw on the screen, so I wonder whether there is some event mechanism to know when it is time to call av_read_frame?
    If I read frames less frequently than the camera delivers them, I get a segmentation fault in some malloc functions inside ffmpeg routines (video_get_buffer).

    I also get a segmentation fault just when drawing on the screen.

    The render function below is called every 0 milliseconds:

    void BasicGLPane::DrawNextFrame()
    {
        int f = 1;
        while (av_read_frame(pFormatCtx, &packet) >= 0)
        {
            // Is this a packet from the video stream?
            if (packet.stream_index == videoStream)
            {
                // Decode video frame
                avcodec_decode_video2(pCodecCtx, pFrame, &FrameFinished, &packet);

                // Did we get a video frame?
                if (FrameFinished)
                {
                    f++;
                    this->fram->Clear();
                    // if (pFrame->pict_type == AV_PICTURE_TYPE_I) wxMessageBox("I cadr");
                    if (pFrame->pict_type != AV_PICTURE_TYPE_I)
                        printMVMatrix(f, pFrame, pCodecCtx);
                    pFrameRGB->linesize[0] = pCodecCtx->width * 3; // in case of rgb24, one plane

                    sws_scale(swsContext, pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
                              pFrameRGB->data, pFrameRGB->linesize);
                    //glGenTextures(1, &VideoTexture);
                    if ((*current_Vtex) == VideoTexture)
                        current_Vtex = &VideoTexture2;
                    else
                        current_Vtex = &VideoTexture;
                    glBindTexture(GL_TEXTURE_2D, (*current_Vtex));
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
                    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pCodecCtx->width, pCodecCtx->height,
                                    GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
                    //glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pCodecCtx->width, pCodecCtx->height, 0, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
                    //glDeleteTextures(1, &VideoTexture);
                    GLenum err;
                    while ((err = glGetError()) != GL_NO_ERROR)
                    {
                        cerr << "OpenGL error: " << err << endl;
                    }
                    // av_free(buffer);
                }
            }

            // Free the packet that was allocated by av_read_frame
            av_free_packet(&packet);
            if (f > 1) break; // render at most one decoded frame per call
        }

        //av_free(pFrameRGB);
    }

    The picture I get on the screen is strange (green quads and red lines are motion vectors of those quads)

    http://i.stack.imgur.com/9HJ9t.png
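
    For context, a common way to avoid the "reading slower than the camera delivers" problem is to decouple reading from rendering: one dedicated reader consumes packets continuously, and the render loop only ever uploads the newest decoded frame. A minimal sketch of that pattern is below, in Python for brevity rather than the asker's C/OpenGL, with grab_frame as a hypothetical callable wrapping av_read_frame plus decoding:

        # Sketch of the reader/renderer decoupling pattern (not the asker's code).
        # grab_frame is a hypothetical callable that reads and decodes one frame,
        # returning None at end of stream.
        import threading

        class FrameSource:
            def __init__(self):
                self.lock = threading.Lock()
                self.latest = None             # most recently decoded frame
                self.stop = threading.Event()

            def reader(self, grab_frame):
                # Consume frames as fast as the camera delivers them so the
                # demuxer/decoder buffers never back up.
                while not self.stop.is_set():
                    frame = grab_frame()
                    if frame is None:
                        break
                    with self.lock:
                        self.latest = frame

            def current(self):
                # Called from the render loop at its own pace; always returns
                # the newest frame and silently drops the ones in between.
                with self.lock:
                    return self.latest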

  • H.264 video file size from camera is much bigger than x264 output

    10 August 2020, by Lawrence song

    I have a UVC camera which supports the H.264 protocol. We can see h264 listed below when we list all the formats it supports.

    


    msm8909:/data # ./ffmpeg -f v4l2 -list_formats all -i /dev/video1
ffmpeg version N-53546-g5eb4405fc5-static https://johnvansickle.com/ffmpeg/  Copyright (c) 2000-2020 the FFmpeg developers
  built with gcc 6.3.0 (Debian 6.3.0-18+deb9u1) 20170516
  configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libfribidi --enable-libass --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libxml2 --enable-libxvid --enable-libzimg
  libavutil      56. 56.100 / 56. 56.100
  libavcodec     58. 97.100 / 58. 97.100
  libavformat    58. 49.100 / 58. 49.100
  libavdevice    58. 11.101 / 58. 11.101
  libavfilter     7. 87.100 /  7. 87.100
  libswscale      5.  8.100 /  5.  8.100
  libswresample   3.  8.100 /  3.  8.100
  libpostproc    55.  8.100 / 55.  8.100
[video4linux2,v4l2 @ 0x4649140] Compressed:        h264 :                H.264 : 1920x1080 1280x720 640x480 320x240
[video4linux2,v4l2 @ 0x4649140] Compressed:       mjpeg :                MJPEG : 1920x1080 1280x720 640x480 320x240


    


    I am running this ffmpeg command to record the UVC camera video to the local device.

    


    ffmpeg -f v4l2 -input_format h264 -framerate 30 -video_size 1280x720 -i /dev/video1 -c copy /sdcard/Movies/output.mkv


    


    The resulting file is much bigger than the one produced by the command below:

    


    ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1280x720 -i /dev/video1 -c:v libx264 -vf format=yuv420p /sdcard/Movies/output.mp4


    


    I assume the camera already supports the H.264 protocol, so I shouldn't need to re-encode to H.264. However, the file size does not look like that of an H.264-encoded video.
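
    A quick way to see what actually ended up in the two files is to compare the codec and bitrate ffprobe reports for each recording; the sketch below assumes ffprobe from the same FFmpeg build is available and uses the output paths from the commands above:

        # Print codec, resolution, file size and overall bitrate of both recordings,
        # so the stream-copied camera H.264 can be compared with the libx264 re-encode.
        import subprocess

        for path in ["/sdcard/Movies/output.mkv", "/sdcard/Movies/output.mp4"]:
            result = subprocess.run(
                ["ffprobe", "-v", "error",
                 "-select_streams", "v:0",
                 "-show_entries", "stream=codec_name,width,height:format=size,bit_rate",
                 "-of", "default=noprint_wrappers=1",
                 path],
                capture_output=True, text=True)
            print(path)
            print(result.stdout)

    A much higher bit_rate on the stream-copied file would account for the size difference without anything being wrong with the H.264 stream itself.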