
Other articles (43)

  • Submit bugs and patches

    13 avril 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
    - the browser you are using, including the exact version
    - as precise an explanation of the problem as possible
    - if possible, the steps taken that lead to the problem
    - a link to the site / page in question
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (9429)

  • How can I capture audio AND video simultaneously with ffmpeg from a USB capture device

    5 October 2011, by oban

    I'm capturing video with a USB Terratec Grabster AV350 (which is based on the em2860 chip).

    I can't get any audio from the result. If I play the captured video with vlc or with ffplay I get only 3 seconds of sound and then silence for the rest of the video.

    During capture I don't get any errors. At the end, ffmpeg reports the size of the video and audio captured.

    I'm using this ffmpeg command:

    ffmpeg -f alsa -ac 2 -i hw:3 -f video4linux2 -i /dev/video0 -acodec ac3 -ab 128k -vcodec mpeg4 -b 6000k -r 25 test5.avi

    The log is:

    [alsa @ 0x9bcd420]Estimating duration from bitrate, this may be inaccurate
    Input #0, alsa, from 'hw:3' :
    Duration : N/A, start : 69930.998994, bitrate : N/A
    Stream #0.0 : Audio : pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
    [video4linux2 @ 0x9bf5d30]Estimating duration from bitrate, this may be inaccurate
    Input #1, video4linux2, from '/dev/video0' :
    Duration : N/A, start : 1307111377.654173, bitrate : -2147483 kb/s
    Stream #1.0 : Video : rawvideo, yuyv422, 720x576, -2147483 kb/s, 1000k tbr, 1000k tbn, 1000k tbc
    [ac3 @ 0x9bf9590]No channel layout specified. The encoder will guess the layout, but it might be incorrect.
    Output #0, avi, to 'test5.avi' :
    Metadata :
    ISFT : Lavf52.64.2
    Stream #0.0 : Video : mpeg4, yuv420p, 720x576, q=2-31, 6000 kb/s, 25 tbn, 25 tbc
    Stream #0.1 : Audio : ac3, 44100 Hz, stereo, s16, 128 kb/s
    Stream mapping :
    Stream #1.0 -> #0.0
    Stream #0.0 -> #0.1
    Press [q] to stop encoding
    frame= 1283 fps= 25 q=2.3 Lsize= 38677kB time=51.32 bitrate=6173.9kbits/s
    video:37755kB audio:846kB global headers:0kB muxing overhead 0.198922%

    If I reduce the command to capture only audio, the resulting file plays back successfully:

    ffmpeg -f alsa -ac 2 -i hw:3,0 -acodec ac3 -ab 128k test5.avi

    [alsa @ 0x8ede420]Estimating duration from bitrate, this may be inaccurate
    Input #0, alsa, from 'hw:3,0' :
    Duration : N/A, start : 70395.998935, bitrate : N/A
    Stream #0.0 : Audio : pcm_s16le, 44100 Hz, 2 channels, s16, 1411 kb/s
    [ac3 @ 0x8eebac0]No channel layout specified. The encoder will guess the layout, but it might be incorrect.
    Output #0, avi, to 'test5.avi' :
    Metadata :
    ISFT : Lavf52.64.2
    Stream #0.0 : Audio : ac3, 44100 Hz, stereo, s16, 128 kb/s
    Stream mapping :
    Stream #0.0 -> #0.0
    Press [q] to stop encoding
    size= 227kB time=13.62 bitrate= 136.8kbits/s
    video:0kB audio:213kB global headers:0kB muxing overhead 6.902375%

    If I run the command capturing only video, then vlc or ffplay can play the result successfully:

    ffmpeg -f video4linux2 -i /dev/video0 -vcodec mpeg4 -b 12000k -r 25 test5.avi

    [video4linux2 @ 0x91d6420]Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2, from '/dev/video0' :
    Duration : N/A, start : 1307112044.025687, bitrate : -2147483 kb/s
    Stream #0.0 : Video : rawvideo, yuyv422, 720x576, -2147483 kb/s, 1000k tbr, 1000k tbn, 1000k tbc
    Output #0, avi, to 'test5.avi' :
    Metadata :
    ISFT : Lavf52.64.2
    Stream #0.0 : Video : mpeg4, yuv420p, 720x576, q=2-31, 12000 kb/s, 25 tbn, 25 tbc
    Stream mapping :
    Stream #0.0 -> #0.0
    Press [q] to stop encoding
    frame= 388 fps= 25 q=2.0 Lsize= 12963kB time=15.52 bitrate=6842.5kbits/s
    video:12949kB audio:0kB global headers:0kB muxing overhead 0.114584%

    A strange behaviour I noticed is that once I have tried capturing video and audio together, I can no longer capture audio at all unless I unplug the AV350 first.

    The G350 is located at card 3:

    htpc@htpc-01 :/proc/asound/G350/pcm0c$ more info
    card : 3
    device : 0
    subdevice : 0
    stream : CAPTURE
    id : USB Audio
    name : USB Audio
    subname : subdevice #0
    class : 0
    subclass : 0
    subdevices_count : 1
    subdevices_avail : 1

    The OS is Linux 2.6.38-8-generic, the Ubuntu Natty Narwhal release.

    Any help on how to tackle this issue would be great.

    Thanks !

  • How do I use the Windows version of gstreamer and wireshark to take a .pcap file and extract H.264 from RTP?

    5 March 2015, by user1118047

    I have a pcap file containing a capture of RTP with H.264 video and SIP with SDP. I would like to extract the video from the RTP stream and save it to a file (h264video.mkv or something similar).

    I have started looking at gstreamer as a possible solution, but I'm having trouble making sense of the output I receive from the program.

    gst-launch -v     filesrc location=testh264.rtp    
    ! application/x-rtp,media=video,clock-rate=90000,payload=123,encoding-name=H264    
    ! rtph264depay                  
    ! ffdec_h264                    
    ! xvimagesink

    Here is an example of something I've tried, but I can't get past rtph264depay because the data I'm feeding it is in an invalid format. What can I do to extract the H.264 payload from my pcap file for use with gstreamer/rtph264depay?

  • How to stream H.264 video with MP3 audio using libavcodec?

    18 September 2012, by dasg

    I read H.264 frames from a webcam and capture audio from a microphone. I need to stream live video to ffserver. For debugging I read the video back from ffserver using ffmpeg with the following command:

    ffmpeg -i http://127.0.0.1:12345/robot.avi -vcodec copy -acodec copy out.avi

    The video in the output file is slightly accelerated. If I add an audio stream it is accelerated several times more. Sometimes there is no audio in the output file.

    Here is my code for encoding audio:

    #include "v_audio_encoder.h"

    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cassert>
    #include <cstring>
    #include <vector>

    struct VAudioEncoder::Private
    {
       AVCodec *m_codec;
       AVCodecContext *m_context;

        std::vector<unsigned char> m_outBuffer;
    };

    VAudioEncoder::VAudioEncoder( int sampleRate, int bitRate )
    {
       d = new Private( );
       d->m_codec = avcodec_find_encoder( CODEC_ID_MP3 );
       assert( d->m_codec );
       d->m_context = avcodec_alloc_context3( d->m_codec );

       // put sample parameters
       d->m_context->channels = 2;
       d->m_context->bit_rate = bitRate;
       d->m_context->sample_rate = sampleRate;
       d->m_context->sample_fmt = AV_SAMPLE_FMT_S16;
       strcpy( d->m_context->codec_name, "libmp3lame" );

       // open it
       int res = avcodec_open2( d->m_context, d->m_codec, 0 );
       assert( res >= 0 );

       d->m_outBuffer.resize( d->m_context->frame_size );
    }

    VAudioEncoder::~VAudioEncoder( )
    {
       avcodec_close( d->m_context );
       av_free( d->m_context );
       delete d;
    }

    void VAudioEncoder::encode( const std::vector<short>& samples, std::vector<unsigned char>& outbuf )
    {
       assert( (int)samples.size( ) == d->m_context->frame_size );

        int outSize = avcodec_encode_audio( d->m_context, d->m_outBuffer.data( ),
                                            d->m_outBuffer.size( ), samples.data( ) );
       if( outSize ) {
           outbuf.resize( outSize );
           memcpy( outbuf.data( ), d->m_outBuffer.data( ), outSize );
       }
       else
           outbuf.clear( );
    }

    int VAudioEncoder::getFrameSize( ) const
    {
       return d->m_context->frame_size;
    }

    Here is my code for streaming video:

    #include "v_out_video_stream.h"

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>
    #include <libavutil/avstring.h>
    #include <libavformat/avio.h>
    }

    #include <stdexcept>
    #include <cassert>

    struct VStatticRegistrar
    {
       VStatticRegistrar( )
       {
           av_register_all( );
           avformat_network_init( );
       }
    };

    VStatticRegistrar __registrar;

    struct VOutVideoStream::Private
    {
       AVFormatContext * m_context;
       int m_videoStreamIndex;
       int m_audioStreamIndex;

       int m_videoBitrate;
       int m_width;
       int m_height;
       int m_fps;
       int m_bitrate;

       bool m_waitKeyFrame;
    };

    VOutVideoStream::VOutVideoStream( int width, int height, int fps, int bitrate )
    {
       d = new Private( );
       d->m_width = width;
       d->m_height = height;
       d->m_fps = fps;
       d->m_context = 0;
       d->m_videoStreamIndex = -1;
       d->m_audioStreamIndex = -1;
       d->m_bitrate = bitrate;
       d->m_waitKeyFrame = true;
    }

    bool VOutVideoStream::connectToServer( const std::string& uri )
    {
       assert( ! d->m_context );

       // initalize the AV context
       d->m_context = avformat_alloc_context();
       if( !d->m_context )
           return false;
       // get the output format
       d->m_context->oformat = av_guess_format( "ffm", NULL, NULL );
       if( ! d->m_context->oformat )
           return false;

       strcpy( d->m_context->filename, uri.c_str( ) );

       // add an H.264 stream
       AVStream *stream = avformat_new_stream( d->m_context, NULL );
       if ( ! stream )
           return false;
       // initalize codec
       AVCodecContext* codec = stream->codec;
        if( d->m_context->oformat->flags & AVFMT_GLOBALHEADER )
           codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       codec->codec_id = CODEC_ID_H264;
       codec->codec_type = AVMEDIA_TYPE_VIDEO;
       strcpy( codec->codec_name, "libx264" );
    //    codec->codec_tag = ( unsigned('4') << 24 ) + ( unsigned('6') << 16 ) + ( unsigned('2') << 8 ) + 'H';
       codec->width = d->m_width;
       codec->height = d->m_height;
       codec->time_base.den = d->m_fps;
       codec->time_base.num = 1;
       codec->bit_rate = d->m_bitrate;
       d->m_videoStreamIndex = stream->index;

       // add an MP3 stream
       stream = avformat_new_stream( d->m_context, NULL );
       if ( ! stream )
           return false;
       // initalize codec
       codec = stream->codec;
        if( d->m_context->oformat->flags & AVFMT_GLOBALHEADER )
           codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       codec->codec_id = CODEC_ID_MP3;
       codec->codec_type = AVMEDIA_TYPE_AUDIO;
       strcpy( codec->codec_name, "libmp3lame" );
       codec->sample_fmt = AV_SAMPLE_FMT_S16;
       codec->channels = 2;
       codec->bit_rate = 64000;
       codec->sample_rate = 44100;
       d->m_audioStreamIndex = stream->index;

       // try to open the stream
        if( avio_open( &d->m_context->pb, d->m_context->filename, AVIO_FLAG_WRITE ) < 0 )
            return false;

       // write the header
       return avformat_write_header( d->m_context, NULL ) == 0;
    }

    void VOutVideoStream::disconnect( )
    {
       assert( d->m_context );

       avio_close( d->m_context->pb );
       avformat_free_context( d->m_context );
       d->m_context = 0;
    }

    VOutVideoStream::~VOutVideoStream( )
    {
       if( d->m_context )
           disconnect( );
       delete d;
    }

    int VOutVideoStream::getVopType( const std::vector<unsigned char>& image )
    {
        if( image.size( ) < 6 )
           return -1;
       unsigned char *b = (unsigned char*)image.data( );

       // Verify NAL marker
       if( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] ) {
           ++b;
           if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
               return -1;
       }

       b += 3;

       // Verify VOP id
       if( 0xb6 == *b ) {
           ++b;
            return ( *b & 0xc0 ) >> 6;
       }

       switch( *b ) {
       case 0x65: return 0;
       case 0x61: return 1;
       case 0x01: return 2;
       }

       return -1;
    }

    bool VOutVideoStream::sendVideoFrame( std::vector<unsigned char>& image )
    {
       // Init packet
       AVPacket pkt;
        av_init_packet( &pkt );
       pkt.flags |= ( 0 >= getVopType( image ) ) ? AV_PKT_FLAG_KEY : 0;

       // Wait for key frame
       if ( d->m_waitKeyFrame ) {
            if( pkt.flags & AV_PKT_FLAG_KEY )
               d->m_waitKeyFrame = false;
           else
               return true;
       }

       pkt.stream_index = d->m_videoStreamIndex;
       pkt.data = image.data( );
       pkt.size = image.size( );
       pkt.pts = pkt.dts = AV_NOPTS_VALUE;

        return av_write_frame( d->m_context, &pkt ) >= 0;
    }

    bool VOutVideoStream::sendAudioFrame( std::vector<unsigned char>& audio )
    {
       // Init packet
       AVPacket pkt;
        av_init_packet( &pkt );
       pkt.stream_index = d->m_audioStreamIndex;
       pkt.data = audio.data( );
       pkt.size = audio.size( );
       pkt.pts = pkt.dts = AV_NOPTS_VALUE;

        return av_write_frame( d->m_context, &pkt ) >= 0;
    }

    Here is how I use it:

    BOOST_AUTO_TEST_CASE(testSendingVideo)
    {
       const int framesToGrab = 90000;

       VOutVideoStream stream( VIDEO_WIDTH, VIDEO_HEIGHT, FPS, VIDEO_BITRATE );
       if( stream.connectToServer( URI ) ) {
           VAudioEncoder audioEncoder( AUDIO_SAMPLE_RATE, AUDIO_BIT_RATE );
           VAudioCapture microphone( MICROPHONE_NAME, AUDIO_SAMPLE_RATE, audioEncoder.getFrameSize( ) );

           VLogitecCamera camera( VIDEO_WIDTH, VIDEO_HEIGHT );
           BOOST_REQUIRE( camera.open( CAMERA_PORT ) );
           BOOST_REQUIRE( camera.startCapturing( ) );

            std::vector<unsigned char> image, encodedAudio;
            std::vector<short> voice;
           boost::system_time startTime;
           int delta;
            for( int i = 0; i < framesToGrab; ++i ) {
               startTime = boost::posix_time::microsec_clock::universal_time( );

               BOOST_REQUIRE( camera.read( image ) );
               BOOST_REQUIRE( microphone.read( voice ) );
               audioEncoder.encode( voice, encodedAudio );

               BOOST_REQUIRE( stream.sendVideoFrame( image ) );
               BOOST_REQUIRE( stream.sendAudioFrame( encodedAudio ) );

               delta = ( boost::posix_time::microsec_clock::universal_time( ) - startTime ).total_milliseconds( );
                if( delta < 1000 / FPS )
                   boost::thread::sleep( startTime + boost::posix_time::milliseconds( 1000 / FPS - delta ) );
           }

           BOOST_REQUIRE( camera.stopCapturing( ) );
           BOOST_REQUIRE( camera.close( ) );
       }
       else
            std::cout << "failed to connect to server" << std::endl;
    }

    I think my problem is in PTS and DTS. Can anyone help me ?