
Media (91)

Other articles (52)

  • The regular Cron tasks of the farm

    1 December 2010

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of all the instances of the mutualisation on a regular basis. Coupled with a system Cron on the central site of the mutualisation, this makes it easy to generate regular visits to the various sites and to prevent the tasks of rarely visited sites from being too (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (9673)

  • Add function addFileToMETAINF for issue #32

    15 September 2015, by Grandt
    Add function addFileToMETAINF for issue #32

    Rev. 4.0.2 - 2015-09-15
    * Changed: Added function addFileToMETAINF to enable users to add the
      Apple iBooks metadata.
    * $book->addFileToMETAINF("com.apple.ibooks.display-options.xml",
      $xmldata);

  • FFMPEG error with avformat_open_input returning -135

    28 April 2015, by LawfulEvil

    I have a DLL that one of my applications uses to receive video from RTSP cameras. Under the hood, the DLL uses the FFMPEG libs from this release zip:

    ffmpeg-20141022-git-6dc99fd-win64-shared.7z

    We have a wide variety of cameras in house and most of them work just fine. However, on one particular Pelco (model number IXE20DN-OCP), I am unable to connect. I tested the camera and the RTSP connection string in VLC and it connects to the camera just fine.

    I found the connection string here: http://www.ispyconnect.com/man.aspx?n=Pelco

    rtsp://IPADDRESS:554/1/stream1

    Oddly, even if I leave the port out in VLC, it still connects, so I'm guessing it's the default RTSP port or that VLC tries a variety of things based on your input.

    In any case, when I attempt to connect, I get an error from avformat_open_input. It returns a code of -135. When I looked in the error code list I didn't see that value listed. For good measure, I printed out all the errors in error.h just to see what their values were.

    DumpErrorCodes - Error Code : AVERROR_BSF_NOT_FOUND = -1179861752
    DumpErrorCodes - Error Code : AVERROR_BUG = -558323010
    DumpErrorCodes - Error Code : AVERROR_BUFFER_TOO_SMALL = -1397118274
    DumpErrorCodes - Error Code : AVERROR_DECODER_NOT_FOUND = -1128613112
    DumpErrorCodes - Error Code : AVERROR_DEMUXER_NOT_FOUND = -1296385272
    DumpErrorCodes - Error Code : AVERROR_ENCODER_NOT_FOUND = -1129203192
    DumpErrorCodes - Error Code : AVERROR_EOF = -541478725
    DumpErrorCodes - Error Code : AVERROR_EXIT = -1414092869
    DumpErrorCodes - Error Code : AVERROR_EXTERNAL = -542398533
    DumpErrorCodes - Error Code : AVERROR_FILTER_NOT_FOUND = -1279870712
    DumpErrorCodes - Error Code : AVERROR_INVALIDDATA = -1094995529
    DumpErrorCodes - Error Code : AVERROR_MUXER_NOT_FOUND = -1481985528
    DumpErrorCodes - Error Code : AVERROR_OPTION_NOT_FOUND = -1414549496
    DumpErrorCodes - Error Code : AVERROR_PATCHWELCOME = -1163346256
    DumpErrorCodes - Error Code : AVERROR_PROTOCOL_NOT_FOUND = -1330794744
    DumpErrorCodes - Error Code : AVERROR_STREAM_NOT_FOUND = -1381258232
    DumpErrorCodes - Error Code : AVERROR_BUG2 = -541545794
    DumpErrorCodes - Error Code : AVERROR_UNKNOWN = -1313558101
    DumpErrorCodes - Error Code : AVERROR_EXPERIMENTAL = -733130664
    DumpErrorCodes - Error Code : AVERROR_INPUT_CHANGED = -1668179713
    DumpErrorCodes - Error Code : AVERROR_OUTPUT_CHANGED = -1668179714
    DumpErrorCodes - Error Code : AVERROR_HTTP_BAD_REQUEST = -808465656
    DumpErrorCodes - Error Code : AVERROR_HTTP_UNAUTHORIZED = -825242872
    DumpErrorCodes - Error Code : AVERROR_HTTP_FORBIDDEN = -858797304
    DumpErrorCodes - Error Code : AVERROR_HTTP_NOT_FOUND = -875574520
    DumpErrorCodes - Error Code : AVERROR_HTTP_OTHER_4XX = -1482175736
    DumpErrorCodes - Error Code : AVERROR_HTTP_SERVER_ERROR = -1482175992

    Nothing even close to -135. I did find this error, sort of, on Stack Overflow ("runtime error when linking ffmpeg libraries in qt creator"), where the author claims it is a DLL loading problem. I'm not sure what led him to think that, but I followed the advice and used Dependency Walker (http://www.dependencywalker.com/) to check which dependencies it thought my DLL needed. It listed a few, but they were already provided in my install package.

    To make sure it was picking them up, I manually removed them from the install and observed a radical change in program behavior (that being that my DLL didn't load and start to run at all).

    So, I've got a bit of init code:

    void FfmpegInitialize()
    {
       av_lockmgr_register(&LockManagerCb);   // install the lock manager callback for thread safety
       av_register_all();                     // register all muxers, demuxers and codecs
       LOG_DEBUG0("av_register_all returned\n");
    }
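
    The LockManagerCb callback registered above is not shown in the post; a minimal sketch of what such a lock manager typically looks like, assuming a std::mutex-based implementation (the signature is the one expected by av_lockmgr_register):

    #include <mutex>

    // Sketch of a lock manager for av_lockmgr_register (the real LockManagerCb
    // is not included in the post); maps FFmpeg's lock operations onto std::mutex.
    static int LockManagerCb(void **mutex, enum AVLockOp op)
    {
       switch (op)
       {
       case AV_LOCK_CREATE:  *mutex = new std::mutex();                   return 0;
       case AV_LOCK_OBTAIN:  static_cast<std::mutex*>(*mutex)->lock();    return 0;
       case AV_LOCK_RELEASE: static_cast<std::mutex*>(*mutex)->unlock();  return 0;
       case AV_LOCK_DESTROY: delete static_cast<std::mutex*>(*mutex);     return 0;
       }
       return 1;
    }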

    Then I’ve got my main open connection routine ...

    int RTSPConnect(const char *URL, int width, int height, frameReceived callbackFunction)
    {

       int errCode =0;
       if ((errCode = avformat_network_init()) != 0)
       {
           LOG_ERROR1("avformat_network_init returned error code %d\n", errCode);  
       }
       LOG_DEBUG0("avformat_network_init returned\n");
       //Allocate space and set up the object used to store all info needed for this connection
       fContextReadFrame = avformat_alloc_context(); // free'd in the Close method

       if (fContextReadFrame == 0)
       {
           LOG_ERROR1("Unable to allocate AVFormatContext.   Error code = %d\n", errCode);
           return FFMPEG_FORMAT_OPEN_FAILURE;
       }

       LOG_DEBUG1("avformat_alloc_context returned %p\n", fContextReadFrame);

       AVDictionary *opts = 0;
       if ((errCode = av_dict_set(&opts, "rtsp_transport", "tcp", 0)) < 0)
       {
           LOG_ERROR1("Unable to set rtsp_transport options.   Error code = %d\n", errCode);
           return FFMPEG_OPTION_SET_FAILURE;
       }
       LOG_DEBUG1("av_dict_set returned %d\n", errCode);

       //open rtsp
       DumpErrorCodes();
       if ((errCode = avformat_open_input(&fContextReadFrame, URL, NULL, &opts)) < 0)
       {
           LOG_ERROR2("Unable to open avFormat RF inputs.   URL = %s, and Error code = %d\n", URL, errCode);      
           LOG_ERROR2("Error Code %d = %s\n", errCode, errMsg(errCode));      
           // NOTE context is free'd on failure.
           return FFMPEG_FORMAT_OPEN_FAILURE;
       }
    ...

    To be sure I didn't misunderstand the error code, I printed the error message from ffmpeg, but the error isn't found and my canned error message is returned instead.
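
    For reference, FFmpeg builds system errors by negating errno: in libavutil/error.h, AVERROR(e) is -(e) on platforms where EDOM > 0, so -135 is most likely AVERROR(135), i.e. the negative of an OS-level errno on this Windows build rather than one of the AVERROR_* constants dumped above. A minimal sketch of an error printer that falls back to the C runtime when av_strerror() has no entry (PrintAvError is a hypothetical helper, not part of the original code):

    #include <cstdio>
    #include <cstring>
    extern "C" {
    #include <libavutil/error.h>
    }

    static void PrintAvError(int errCode)
    {
       char buf[AV_ERROR_MAX_STRING_SIZE] = {0};
       if (av_strerror(errCode, buf, sizeof(buf)) < 0)
       {
           // No FFmpeg description for this code: AVUNERROR() recovers the
           // positive errno that AVERROR() wrapped, so ask the C runtime.
           std::snprintf(buf, sizeof(buf), "system error %d: %s",
                         AVUNERROR(errCode), std::strerror(AVUNERROR(errCode)));
       }
       std::fprintf(stderr, "ffmpeg error %d: %s\n", errCode, buf);
    }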

    My next step was going to be hooking up Wireshark on my connection attempt and on the VLC connection attempt and trying to figure out what differences (if any) are causing the problem and what I can do to ffmpeg to make it work. As I said, I've got a dozen other cameras in house that use RTSP and they work with my DLL. Some use usernames/passwords/etc. as well (so I know that isn't the problem).

    Also, my run logs:

    FfmpegInitialize - av_register_all returned
    Open - Open called.  Pointers valid, passing control.
    Rtsp::RtspInterface::Open - Rtsp::RtspInterface::Open called
    Rtsp::RtspInterface::Open - VideoSourceString(35) = rtsp://192.168.14.60:554/1/stream1
    Rtsp::RtspInterface::Open - Base URL = (192.168.14.60:554/1/stream1)
    Rtsp::RtspInterface::Open - Attempting to open (rtsp://192.168.14.60:554/1/stream1) for WxH(320x240) video
    RTSPSetFormatH264 - RTSPSetFormatH264
    RTSPConnect - Called
    LockManagerCb - LockManagerCb invoked for op 1
    LockManagerCb - LockManagerCb invoked for op 2
    RTSPConnect - avformat_network_init returned
    RTSPConnect - avformat_alloc_context returned 019E6000
    RTSPConnect - av_dict_set returned 0
    DumpErrorCodes - Error Code : AVERROR_BSF_NOT_FOUND = -1179861752
    ...
    DumpErrorCodes - Error Code : AVERROR_HTTP_SERVER_ERROR = -1482175992
    RTSPConnect - Unable to open avFormat RF inputs.   URL = rtsp://192.168.14.60:554/1/stream1, and Error code = -135
    RTSPConnect - Error Code -135 = No Error Message Available

    I'm going to move forward with Wireshark but would like to know the origin of the -135 error code from ffmpeg. When I look at the code, if 'ret' is getting set to -135, it must be happening as a result of the return code from a helper method and not directly in the avformat_open_input method.

    https://www.ffmpeg.org/doxygen/2.5/libavformat_2utils_8c_source.html#l00398

    After upgrading to the latest daily ffmpeg build, I get data in Wireshark. Real Time Streaming Protocol:

    Request: SETUP rtsp://192.168.14.60/stream1/track1 RTSP/1.0\r\n
    Method: SETUP
    URL: rtsp://192.168.14.60/stream1/track1
    Transport: RTP/AVP/TCP;unicast;interleaved=0-1
    CSeq: 3\r\n
    User-Agent: Lavf56.31.100\r\n
    \r\n

    The response to that is the first ’error’ that I can detect in the initiation.

    Response: RTSP/1.0 461 Unsupported Transport\r\n
    Status: 461
    CSeq: 3\r\n
    Date: Sun, Jan 04 1970 16:03:05 GMT\r\n
    \r\n

    I'm going to guess that... it means the transport we selected is unsupported. A quick check of the code reveals I picked 'tcp'. Looking through the reply to the DESCRIBE command, it appears:

    Media Protocol: RTP/AVP

    Further, when SETUP is issued by ffmpeg, it specifies:

    Transport: RTP/AVP/TCP;unicast;interleaved=0-1

    I'm going to try, on failure here, picking another transport type and see how that works. I still don't know where the -135 comes from.
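
    Since the DESCRIBE reply only advertises RTP/AVP and the interleaved TCP SETUP is refused with 461, one way to fall back is to retry the open with the UDP transport. A minimal sketch of that retry, assuming a hypothetical OpenRtsp helper built from the same FFmpeg calls used above (avformat_open_input frees a caller-allocated context on failure, so a fresh one is allocated for each attempt):

    static AVFormatContext *OpenRtsp(const char *url)
    {
       // Try interleaved TCP first, then fall back to plain UDP if refused.
       static const char *const transports[] = { "tcp", "udp" };
       for (const char *transport : transports)
       {
           AVFormatContext *ctx = avformat_alloc_context();
           AVDictionary *opts = nullptr;
           av_dict_set(&opts, "rtsp_transport", transport, 0);
           int err = avformat_open_input(&ctx, url, nullptr, &opts);
           av_dict_free(&opts);
           if (err >= 0)
               return ctx;   // caller closes it with avformat_close_input()
           // on failure avformat_open_input() has already freed ctx
       }
       return nullptr;
    }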

  • using libav instead of ffmpeg

    21 January 2015, by n00bie

    I want to stream video over HTTP. I am using Ogg (Theora + Vorbis). Right now I have a sender and a receiver, and I can run them from the command line:

    Sender:

    ffmpeg -f video4linux2 -s 320x240 -i /dev/mycam -codec:v libtheora -qscale:v 5 -f ogg http://127.0.0.1:8080

    Receiver:

    sudo gst-launch-0.10 tcpserversrc port = 8080 ! oggdemux ! theoradec ! autovideosink

    Now, the sender sends both audio and video, but the receiver plays only the video.

    It works perfectly, but now I want to stop using the ffmpeg executable and use only the libav* libraries instead.

    Here's my class for streaming:

    class VCORE_LIBRARY_EXPORT VVideoWriter : private boost::noncopyable
    {
    public:
       VVideoWriter( );
       ~VVideoWriter( );

       bool openFile( const std::string& name,
                      int fps, int videoBitrate, int width, int height,
                      int audioSampleRate, bool stereo, int audioBitrate );
       void close( );

       bool writeVideoFrame( const uint8_t* image, int64_t timestamp );
       bool writeAudioFrame( const int16_t* data, int64_t timestamp  );

       int audioFrameSize( ) const;

    private:
       AVFrame *m_videoFrame;
       AVFrame *m_audioFrame;

       AVFormatContext *m_context;
       AVStream *m_videoStream;
       AVStream *m_audioStream;

       int64_t m_startTime;
    };

    Initialization:

    bool VVideoWriter::openFile( const std::string& name,
                                int fps, int videoBitrate, int width, int height,
                                int audioSampleRate, bool stereo, int audioBitrate )
    {
            if( ! m_context )
            {
           // initialize the AV context
               m_context = avformat_alloc_context( );
               assert( m_context );

               // get the output format
               m_context->oformat = av_guess_format( "ogg", name.c_str( ), nullptr );
               if( m_context->oformat )
               {
                   strcpy( m_context->filename, name.c_str( ) );

                   auto codecID = AV_CODEC_ID_THEORA;
                   auto codec = avcodec_find_encoder( codecID );

                   if( codec )
                   {
                       m_videoStream = avformat_new_stream( m_context, codec );
                       assert( m_videoStream );

                    // initialize codec
                       auto codecContext = m_videoStream->codec;
                       bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
                       if( globalHeader )
                           codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
                       codecContext->codec_id = codecID;
                       codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
                       codecContext->width = width;
                       codecContext->height = height;
                       codecContext->time_base.den = fps;
                       codecContext->time_base.num = 1;
                       codecContext->bit_rate = videoBitrate;
                       codecContext->pix_fmt = PIX_FMT_YUV420P;
                       codecContext->flags |= CODEC_FLAG_QSCALE;
                       codecContext->global_quality = FF_QP2LAMBDA * 5;

                       int res = avcodec_open2( codecContext, codec, nullptr );

                       if( res >= 0 )
                       {
                           auto codecID = AV_CODEC_ID_VORBIS;
                           auto codec = avcodec_find_encoder( codecID );

                           if( codec )
                           {
                               m_audioStream = avformat_new_stream( m_context, codec );
                               assert( m_audioStream );
                            // initialize codec
                               auto codecContext = m_audioStream->codec;

                               bool globalHeader = m_context->oformat->flags & AVFMT_GLOBALHEADER;
                               if( globalHeader )
                                   codecContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
                               codecContext->codec_id = codecID;
                               codecContext->codec_type = AVMEDIA_TYPE_AUDIO;
                               codecContext->sample_fmt = AV_SAMPLE_FMT_FLTP;
                               codecContext->bit_rate = audioBitrate;
                               codecContext->sample_rate = audioSampleRate;
                               codecContext->channels = stereo ? 2 : 1;
                               codecContext->channel_layout = stereo ? AV_CH_LAYOUT_STEREO : AV_CH_LAYOUT_MONO;

                               res = avcodec_open2( codecContext, codec, nullptr );

                               if( res >= 0 )
                               {
                                   // try to open the file
                                   if( avio_open( &m_context->pb, m_context->filename, AVIO_FLAG_WRITE ) >= 0 )
                                   {
                                       m_audioFrame->nb_samples = codecContext->frame_size;
                                       m_audioFrame->format = codecContext->sample_fmt;
                                       m_audioFrame->channel_layout = codecContext->channel_layout;

                                       boost::posix_time::ptime time_t_epoch( boost::gregorian::date( 1970, 1, 1 ) );
                                       m_context->start_time_realtime = ( boost::posix_time::microsec_clock::universal_time( ) - time_t_epoch ).total_microseconds( );
                                       m_startTime = -1;

                                       // write the header
                                       if( avformat_write_header( m_context, nullptr ) >= 0 )
                                       {
                                           return true;
                                       }
                                       else std::cerr << "VVideoWriter: failed to write video header" << std::endl;
                                   }
                                   else std::cerr << "VVideoWriter: failed to open video file " << name << std::endl;
                               }
                               else std::cerr << "VVideoWriter: failed to initialize audio codec" << std::endl;
                           }
                           else std::cerr << "VVideoWriter: requested audio codec is not supported" << std::endl;
                       }
                       else std::cerr << "VVideoWriter: failed to initialize video codec" << std::endl;
                   }
                   else std::cerr << "VVideoWriter: requested video codec is not supported" << std::endl;
               }
               else std::cerr << "VVideoWriter: requested video format is not supported" << std::endl;

               avformat_free_context( m_context );
               m_context = nullptr;
               m_videoStream = nullptr;
               m_audioStream = nullptr;
           }
           return false;
    }

    Writing video:

    bool VVideoWriter::writeVideoFrame( const uint8_t* image, int64_t timestamp )
    {
       if( m_context ) {
           auto codecContext = m_videoStream->codec;
       avpicture_fill( reinterpret_cast<AVPicture*>( m_videoFrame ),
                       const_cast<uint8_t*>( image ),
                       codecContext->pix_fmt, codecContext->width, codecContext->height );

           AVPacket pkt;
           av_init_packet( & pkt );
           pkt.data = nullptr;
           pkt.size = 0;
           int gotPacket = 0;
           if( ! avcodec_encode_video2( codecContext, &pkt, m_videoFrame, & gotPacket ) ) {
               if( gotPacket == 1 ) {
                   pkt.stream_index = m_videoStream->index;
                   int res;
                   {
                       pkt.pts = AV_NOPTS_VALUE;
                       pkt.dts = AV_NOPTS_VALUE;
                       pkt.stream_index = m_videoStream->index;
                       res = av_write_frame( m_context, &pkt );
                   }
                   av_free_packet( & pkt );
                   return res >= 0;
               }
               assert( ! pkt.size );
               return true;
           }
       }
       return false;
    }

    Writing audio (for now I write dummy test audio):

    bool VVideoWriter::writeAudioFrame( const int16_t* data, int64_t timestamp )
    {
       if( m_context ) {
           auto codecContext = m_audioStream->codec;

           int buffer_size = av_samples_get_buffer_size(nullptr, codecContext->channels, codecContext->frame_size, codecContext->sample_fmt, 0);

           float *samples = (float*)av_malloc(buffer_size);

           for (int i = 0; i < buffer_size / sizeof(float); i++)
               samples[i] = 1000. * sin((double)i/2.);

           int ret = avcodec_fill_audio_frame( m_audioFrame, codecContext->channels, codecContext->sample_fmt, (const uint8_t*)samples, buffer_size, 0);

           assert( ret >= 0 );
           (void)(ret);

           AVPacket pkt;
           av_init_packet( & pkt );
           pkt.data = nullptr;
           pkt.size = 0;
           int gotPacket = 0;
           if( ! avcodec_encode_audio2( codecContext, &pkt, m_audioFrame, & gotPacket ) ) {
               if( gotPacket == 1 ) {
                   pkt.stream_index = m_audioStream->index;
                   int res;
                   {
                       pkt.pts = AV_NOPTS_VALUE;
                       pkt.dts = AV_NOPTS_VALUE;
                       pkt.stream_index = m_audioStream->index;
                       res = av_write_frame( m_context, &pkt );
                   }
                   av_free_packet( & pkt );
                   return res >= 0;
               }
               assert( ! pkt.size );
               return true;
           }
           return false;
       }
       return false;
    }

    Here's a test example (I send video from the webcam and dummy audio):

    class TestVVideoWriter : public sigslot::has_slots<>
    {
    public:
       TestVVideoWriter( ) :
           m_fileOpened( false )
       {
       }

       void onCapturedFrame( cricket::VideoCapturer*, const cricket::CapturedFrame* capturedFrame )
       {
           if( m_fileOpened ) {
           m_writer.writeVideoFrame( reinterpret_cast<const uint8_t*>( capturedFrame->data ),
                                     capturedFrame->time_stamp / 1000 );
           m_writer.writeAudioFrame( nullptr, 0 );


           } else {
                 m_fileOpened = m_writer.openFile( "http://127.0.0.1:8080",
                                                   15, 40000, capturedFrame->width, capturedFrame->height,
                                                   16000, false, 64000 );
           }
       }

    public:
       vcore::VVideoWriter m_writer;
       bool m_fileOpened;
    };

    TestVVideoWriter testWriter;

    BOOST_AUTO_TEST_SUITE(TEST_VIDEO_WRITER)

    BOOST_AUTO_TEST_CASE(testWritingVideo)
    {
       cricket::LinuxDeviceManager deviceManager;
       std::vector<cricket::Device> devs;
       if( deviceManager.GetVideoCaptureDevices( &devs ) ) {
           if( devs.size( ) ) {
               boost::shared_ptr<cricket::VideoCapturer> camera( deviceManager.CreateVideoCapturer( devs[ 0 ] ) );
               if( camera ) {
                   cricket::VideoFormat format( 320, 240, cricket::VideoFormat::FpsToInterval( 30 ),
                                                camera->GetSupportedFormats( )->front( ).fourcc );
                   cricket::VideoFormat best;
                   if( camera->GetBestCaptureFormat( format, &best ) ) {
                       camera->SignalFrameCaptured.connect( &testWriter, &TestVVideoWriter::onCapturedFrame );
                       if( camera->Start( best ) != cricket::CS_FAILED ) {
                           boost::this_thread::sleep( boost::posix_time::seconds( 10 ) );
                           return;
                       }
                   }
               }
           }
       }
       std::cerr << "Problem has occurred with the camera" << std::endl;
    }

    BOOST_AUTO_TEST_SUITE_END() // TEST_VIDEO_WRITER

    But in this case, gstreamer starts playing the video only when my test program stops executing (after 10 seconds in this case). That does not suit me; I want gstreamer to start playing immediately after my test program starts.
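
    A minimal sketch of a possible workaround, assuming the delay comes from the muxer and the AVIO layer buffering the Ogg pages until the output is closed: flush both after each packet written in writeVideoFrame/writeAudioFrame (FlushOutput is a hypothetical helper, not part of the class above):

    static void FlushOutput(AVFormatContext *ctx)
    {
       av_write_frame(ctx, nullptr);   // a NULL packet asks the muxer to flush its buffered data
       if (ctx->pb)
           avio_flush(ctx->pb);        // push buffered bytes out to the HTTP connection
    }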

    Could someone help me?

    P.S. Sorry for my English.