
Other articles (20)

  • Selection of projects using MediaSPIP

2 May 2011, by

    The examples below are representative of how MediaSPIP is used in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)

  • The MediaSPIP configuration area

    29 November 2010, by

    The MediaSPIP configuration area is reserved for administrators. An "administer" menu link is generally displayed at the top of the page [1].
    It lets you configure your site in detail.
    Navigation in this configuration area is divided into three parts: the general site configuration, which notably lets you modify: the main information about the site (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires knowing a bit about how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since the usual SPIP private area is no longer used.
    First of all, you must have installed the same files as the installation (...)

On other sites (5292)

  • Ffmpeg set duration using node-fluent-ffmpeg

    23 May 2013, by Vprnl

    I'm really new to the world of ffmpeg, so please excuse me if this is a stupid question.

    I'm using the node-fluent-ffmpeg module to stream a movie and convert it from AVI to WebM.
    So far so good (it plays the video), but I'm having trouble passing the duration to the player. My ultimate goal is to be able to skip ahead in the movie, but first the player needs to know how long the video is.

    My code is as follows:

    var fs = require('fs');
    var ffmpeg = require('fluent-ffmpeg'); // node-fluent-ffmpeg

    var stat = fs.statSync(movie);

    var start = 0;
    var end = 0;
    var range = req.header('Range');
    if (range != null) {
        start = parseInt(range.slice(range.indexOf('bytes=') + 6,
                                     range.indexOf('-')));
        end = parseInt(range.slice(range.indexOf('-') + 1,
                                   range.length));
    }
    if (isNaN(end) || end == 0) end = stat.size - 1;
    if (start > end) return;

    res.writeHead(206, { // NOTE: a partial http response
        'Connection': 'close',
        'Content-Type': 'video/webm',
        'Content-Length': end - start,
        'Content-Range': 'bytes ' + start + '-' + end + '/' + stat.size,
        'Transfer-Encoding': 'chunked'
    });

    var proc = new ffmpeg({ source: movie, nolog: true, priority: 1, timeout: 15000 })
        .toFormat('webm')
        .withVideoBitrate('1024k')
        .addOptions(['-probesize 900000', '-analyzeduration 0', '-bufsize 14000'])
        .writeToStream(res, function(retcode, error) {
            if (!error) {
                console.log('file has been converted successfully', retcode);
            } else {
                console.log('file conversion error', error);
            }
        });

    I tried to set the header with a start and an end based on this article: http://delog.wordpress.com/2011/04/25/stream-webm-file-to-chrome-using-node-js/

    I also looked in the FFmpeg documentation and found -t (duration) and -ss.
    But I don't quite know how to convert the byte range to seconds.

    I feel like I'm pretty close to a solution, but my inexperience with the subject matter keeps me from getting it to work. If I'm unclear in any way, please let me know. (I have a tendency to explain things fuzzily.)

    Thanks in advance!
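    Not an answer the poster confirmed, but one way to bridge the byte range and -ss: if you know (or probe) the total duration of the source file, a byte offset can be turned into an approximate timestamp by assuming a roughly constant average bitrate. All names below are illustrative, not part of node-fluent-ffmpeg's API:

    ```javascript
    // Rough byte-offset -> seconds estimate, assuming a roughly constant
    // average bitrate across the source file. Purely an illustrative
    // helper, not part of fluent-ffmpeg.
    function byteOffsetToSeconds(byteOffset, fileSizeBytes, durationSeconds) {
        if (fileSizeBytes <= 0 || durationSeconds <= 0) return 0;
        // clamp the fraction to [0, 1] so malformed Range headers
        // cannot seek past the end of the movie
        var fraction = Math.min(Math.max(byteOffset / fileSizeBytes, 0), 1);
        return fraction * durationSeconds;
    }
    ```

    The resulting value in seconds could then be handed to ffmpeg as a seek before transcoding; recent fluent-ffmpeg versions expose this as .seekInput(seconds), but the 2013-era node-fluent-ffmpeg API may differ, so treat that call as an assumption and check the version in use.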

  • How to re-encode a Flash stream to MP4 and/or WebM inside a server implementation?

    18 April 2013, by user2294505

    Hi guys, let me start by saying that I'm totally new to video streaming.

    Currently I have to support a server implemented in C that works as a mediator between stream producers and stream consumers. In my case I have one remote machine that generates a Flash stream and one or more clients that can watch it. The server works as a proxy for all of them. We are using our own server-side implementation of the RTMP protocol, and for asynchronous work over HTTP we are using the libevent library. This whole construction works fine in the common case.

    Now we need to deliver this stream to HTML5 clients, so we need to support new formats. Our decision was that MP4 and WebM are enough for us. Based on the HTTP request, our internal implementation recognizes what type of stream the client needs. For example, when the client needs MP4, the URI is something like this:

    http://192.168.0.5/video.mp4?blah-blah-blah

    where "blah-blah-blah" are a few parameters that identify the client. We have already implemented a mechanism that converts input frames to raw pictures, and this implementation works fine when we stream JPEGs, since we again use the libavformat library to encode each raw picture to JPEG. In the JPEG case the stream data must contain HTTP metadata describing every picture. The client request is the same as for the MP4 stream, except that instead of video.mp4 we use jpegstream.htm.

    Now I need to convert this input stream to MP4 and/or WebM, and here my problems start. For generating MP4 and WebM videos I'm using the ffmpeg libraries, and based on one of the ffmpeg examples (muxing) I'm trying to convert the already generated pictures to the currently selected new format. More or less this conversion is OK, but after that, I don't know why, I can't send the video to the consumer. I'm using the following code to prepare the avio context:

    int iSize = 4 * 1024;
    unsigned char *ptrBuf = ( unsigned char * )av_malloc( iSize );
    ptrOFC->pb = avio_alloc_context( ptrBuf, iSize, 1 /* write_flag */, ptrTCDObj, NULL, write_pkg, NULL );
    if ( !ptrOFC->pb ) {
       goto ERROR;
    }
    avformat_write_header( ptrOFC, NULL );

    When the server receives a frame from Flash, we convert it to the corresponding output format with code like this:

    iResult = avcodec_encode_video2( ptrTCDataObj->m_ptrOCC, &packet, pictureFrame, &iGotPacket );

    and write it to the stream, when encoding succeeds and a packet exists, with:

    av_interleaved_write_frame( ptrOFC, &packet );

    At this point our code expects a call to the write_pkg function at some moment, but nothing happens here :-(. The situation is exactly the same if I write directly with av_write_frame. The write_pkg function has a very simple body:

    int write_pkg( void *ptrOpaque, uint8_t *ptrBuffer, int iBufferSize )
    {
       STransCoderData_t *ptrTCDObj = ( STransCoderData_t * ) ptrOpaque;
       struct evbuffer *ptrFrameOut;
       ptrFrameOut = evbuffer_new();
       evbuffer_add( ptrFrameOut, ptrBuffer,( size_t ) iBufferSize );
       http_client_write_video( ptrFrameOut, ptrTCDObj->m_ptrHTTPClient, NULL );
       evbuffer_free( ptrFrameOut );
       return iBufferSize;
    }

    The STransCoderData_t structure and the http_client_write_video function are not interesting at the moment, because we never reach them :-(

    As a test consumer I'm using the VLC player via "Open Network Stream":

    http://192.168.0.5/video.mp4?blah-blah-blah

    VLC doesn't show anything, not even errors.

    Any ideas, comments and help are welcome.
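    One direction that may be worth checking, purely as an assumption on my part: the default MP4 muxer writes its moov index when av_write_trailer() is called and expects a seekable output, so with a custom AVIOContext that has no seek callback it can buffer internally instead of pushing every packet through write_pkg. Fragmented MP4 avoids the need to seek. A configuration sketch, not a tested fix, where ptrOFC is the AVFormatContext from the question:

    ```c
    /* Sketch: request fragmented MP4 before writing the header, so the
     * muxer flushes packets to the custom AVIO write callback as they
     * arrive instead of waiting for av_write_trailer(). */
    #include <libavformat/avformat.h>

    AVDictionary *opts = NULL;
    av_dict_set( &opts, "movflags", "empty_moov+frag_keyframe", 0 );

    int iRet = avformat_write_header( ptrOFC, &opts );
    av_dict_free( &opts );
    if ( iRet < 0 ) {
        /* the original code does not check this return value, which can
         * hide exactly this kind of muxer failure */
        goto ERROR;
    }
    ```

    The WebM (Matroska-based) muxer generally streams to a non-seekable output without extra options, so if only the MP4 variant stays silent, this is a plausible place to look. Calling avio_flush( ptrOFC->pb ) after each av_interleaved_write_frame() is another way to force data out of the 4 KB AVIO buffer sooner.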

  • Correct command to transmit audio to an IP camera using ffmpeg?

    4 November 2016, by the_naive

    So I found some hints in this discussion about the correct command to transmit audio to an Axis IP camera using ffmpeg on Windows, but I still have not managed to successfully transmit audio to the camera.

    The command I'm using is the following:

    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -multiple_requests 1 -reconnect_at_eof 1 -reconnect_streamed 1 -content_type "audio/basic" -report

    The output I get from this command is the following:

    ffmpeg started on 2016-11-04 at 17:32:13
    Report written to "ffmpeg-20161104-173213.log"
    Command line:
    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -content_type audio/basic -multiple_requests 1 -reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 -report
    ffmpeg version N-82225-gb4e9252 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 35.100 / 55. 35.100
     libavcodec     57. 66.101 / 57. 66.101
     libavformat    57. 57.100 / 57. 57.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 66.100 /  6. 66.100
     libswscale      4.  3.100 /  4.  3.100
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'dshow'.
    Reading option '-i' ... matched as input file with argument 'audio=Microphone (2- High Definition Audio Device)'.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'pcm_mulaw'.
    Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '1'.
    Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '16000'.
    Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
    Reading option 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi' ... matched as output file.
    Reading option '-content_type' ... matched as AVOption 'content_type' with argument 'audio/basic'.
    Reading option '-multiple_requests' ... matched as AVOption 'multiple_requests' with argument '1'.
    Reading option '-reconnect' ... matched as AVOption 'reconnect' with argument '1'.
    Reading option '-reconnect_at_eof' ... matched as AVOption 'reconnect_at_eof' with argument '1'.
    Reading option '-reconnect_streamed' ... matched as AVOption 'reconnect_streamed' with argument '1'.
    Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
    Trailing options were found on the commandline.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument debug.
    Applying option y (overwrite output files) with argument 1.
    Applying option report (generate a report) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input file audio=Microphone (2- High Definition Audio Device).
    Applying option re (read input at native frame rate) with argument 1.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: audio=Microphone (2- High Definition Audio Device).
    [dshow @ 00000000000279e0] Selecting pin Capture on audio only
    dshow passing through packet of type audio size    88200 timestamp 310221040000 orig timestamp 310221040000 graph timestamp 310226130000 diff 5090000 Microphone (2- High Definition Audio Device)
    [dshow @ 00000000000279e0] All info found
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, dshow, from 'audio=Microphone (2- High Definition Audio Device)':
     Duration: N/A, start: 31022.104000, bitrate: 1411 kb/s
       Stream #0:0, 1, 1/10000000: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Successfully opened the file.
    Parsing a group of options: output file http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    Applying option c:a (codec name) with argument pcm_mulaw.
    Applying option ac (set number of audio channels) with argument 1.
    Applying option ar (set audio sampling rate (in Hz)) with argument 16000.
    Applying option b:a (video bitrate (please use -b:v)) with argument 128k.
    Applying option f (force format) with argument flv.
    Successfully parsed a group of options.
    Opening an output file: http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    [http @ 0000000001c94040] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1
    Transfer-Encoding: chunked
    User-Agent: Lavf/57.57.100
    Accept: */*
    Expect: 100-continue
    Connection: close
    Host: 10.10.210.2
    Icy-MetaData: 1

    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1
    Transfer-Encoding: chunked
    User-Agent: Lavf/57.57.100
    Accept: */*
    Connection: close
    Host: 10.10.210.2
    Icy-MetaData: 1
    Authorization: Digest username="operator", realm="AXIS_ACCC8E027F47", nonce="0EcsO3xABQA=ab5efc4740a6c625ecf6a6729d0d67d2b62b615a", uri="/axis-cgi/audio/transmit.cgi", response="4bd3a627b20d6bcaba9e2f595ef6cd2a", algorithm="MD5", qop="auth", cnonce="6a579dd6664b57eb", nc=00000001
    Successfully opened the file.
    detected 8 logical cores
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'time_base' to value '1/44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_rate' to value '44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_fmt' to value 's16'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'channel_layout' to value '0x3'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:0x3
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_fmts' to value 's16'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_rates' to value '16000'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'channel_layouts' to value '0x4'
    [audio format for output stream 0:0 @ 0000000001c9fa20] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:0'
    [AVFilterGraph @ 000000000002ab20] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Using s16p internally between filters
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Matrix coefficients:
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] FC: FL:0.500000 FR:0.500000
    [auto-inserted resampler 0 @ 0000000001ca4060] ch:2 chl:stereo fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:s16 r:16000Hz
    Output #0, flv, to 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi':
     Metadata:
       encoder         : Lavf57.57.100
       Stream #0:0, 0, 1/1000: Audio: pcm_mulaw ([8][0][0][0] / 0x0008), 16000 Hz, mono, s16, 128 kb/s
       Metadata:
         encoder         : Lavc57.66.101 pcm_mulaw
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_mulaw (native))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    av_interleaved_write_frame(): Unknown error
    No more output streams to write to, finishing.
    Error writing trailer of http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi: Error number -10053 occurred
    size=       8kB time=00:00:00.49 bitrate= 131.2kbits/s speed=79.6x
    video:0kB audio:8kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.492485%
    Input file #0 (audio=Microphone (2- High Definition Audio Device)):
     Input stream #0:0 (audio): 1 packets read (88200 bytes); 1 frames decoded (22050 samples);
     Total: 1 packets (88200 bytes) demuxed
    Output file #0 (http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi):
     Output stream #0:0 (audio): 1 frames encoded (7984 samples); 1 packets muxed (7984 bytes);
     Total: 1 packets (7984 bytes) muxed
    1 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 0000000001c9e4c0] Statistics: 0 seeks, 2 writeouts
    dshow passing through packet of type audio size    12152 timestamp 310226130000 orig timestamp 310226130000 graph timestamp 310226820000 diff 690000 Microphone (2- High Definition Audio Device)
    Conversion failed!

    For some reason, despite setting multiple_requests, reconnect_at_eof and reconnect_streamed all to true, the connection gets closed.

    Could you please tell me what I'm doing wrong?
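    One detail visible in the transcript above: ffmpeg logs "Trailing options were found on the commandline", which means the options placed after the output URL (-multiple_requests, -reconnect_at_eof, -reconnect_streamed, -content_type) are parsed but never applied to the output. A reordered sketch of the same command, with those options moved in front of the output URL (untested against an actual camera):

    ```shell
    # Same flags as in the question, but the HTTP protocol options are
    # placed before the output URL so ffmpeg actually applies them.
    ffmpeg -report -v debug -y -re -f dshow \
      -i "audio=Microphone (2- High Definition Audio Device)" \
      -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k \
      -multiple_requests 1 -reconnect_at_eof 1 -reconnect_streamed 1 \
      -content_type "audio/basic" \
      -f flv "http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi"
    ```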