Advanced search

Media (1)

Keyword: - Tags -/portrait

Other articles (56)

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats:
    - images: png, gif, jpg, bmp and more
    - audio: MP3, Ogg, Wav and more
    - video: AVI, MP4, OGV, mpg, mov, wmv and more
    - text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)

On other sites (7629)

  • how to convert an MPEGTS file into an FLV file by programming with libavcodec

    9 May 2014, by Hexing B

    I want to convert an MPEG-TS file into an FLV file with the libavcodec APIs (remuxing only; the video/audio codecs are not changed). The following is code I found on the web. After some hacking it generates an FLV file, but the file cannot be played. Any clue how to fix this?

    #include <stdio.h>      /* printf, getchar */
    #include <string.h>     /* memmove */

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>
    #include <libavdevice/avdevice.h>
    #include <libavutil/mathematics.h>
    #include <libswscale/swscale.h>

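    // Creates a stream in the output context and copies the codec parameters of input_stream onto it.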
    static AVStream* add_output_stream(AVFormatContext* output_format_context, AVStream* input_stream) {
       AVCodecContext* input_codec_context = NULL;
       AVCodecContext* output_codec_context = NULL;

       AVStream* output_stream = NULL;
       output_stream = avformat_new_stream(output_format_context, 0);
       if (!output_stream) {
           printf("Call av_new_stream function failed\n");
           return NULL;
       }

       input_codec_context = input_stream->codec;
       output_codec_context = output_stream->codec;

       output_codec_context->codec_id = input_codec_context->codec_id;
       output_codec_context->codec_type = input_codec_context->codec_type;
       // output_codec_context->codec_tag = input_codec_context->codec_tag;
       output_codec_context->codec_tag = av_codec_get_tag(output_format_context->oformat->codec_tag, input_codec_context->codec_id);
       output_codec_context->bit_rate = input_codec_context->bit_rate;
       output_codec_context->extradata = input_codec_context->extradata;
       output_codec_context->extradata_size = input_codec_context->extradata_size;

       if (av_q2d(input_codec_context->time_base) * input_codec_context->ticks_per_frame > av_q2d(input_stream->time_base) && av_q2d(input_stream->time_base) < 1.0 / 1000) {
           output_codec_context->time_base = input_codec_context->time_base;
           output_codec_context->time_base.num *= input_codec_context->ticks_per_frame;
       } else {
           output_codec_context->time_base = input_stream->time_base;
       }
       switch (input_codec_context->codec_type) {
       case AVMEDIA_TYPE_AUDIO:
           output_codec_context->channel_layout = input_codec_context->channel_layout;
           output_codec_context->sample_rate = input_codec_context->sample_rate;
           output_codec_context->channels = input_codec_context->channels;
           output_codec_context->frame_size = input_codec_context->frame_size;
           if ((input_codec_context->block_align == 1 && input_codec_context->codec_id == CODEC_ID_MP3) || input_codec_context->codec_id == CODEC_ID_AC3) {
               output_codec_context->block_align = 0;
           } else {
               output_codec_context->block_align = input_codec_context->block_align;
           }
           break;
       case AVMEDIA_TYPE_VIDEO:
           output_codec_context->pix_fmt = input_codec_context->pix_fmt;
           output_codec_context->width = input_codec_context->width;
           output_codec_context->height = input_codec_context->height;
           output_codec_context->has_b_frames = input_codec_context->has_b_frames;
           if (output_format_context->oformat->flags & AVFMT_GLOBALHEADER) {
               output_codec_context->flags |= CODEC_FLAG_GLOBAL_HEADER;
           }
           // output_codec_context->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
           break;
       default:
           break;
       }

       return output_stream;
    }

    int main(int argc, char* argv[]) {
       const char* input = argv[1];
       const char* output_prefix = NULL;
       char* segment_duration_check = 0;
       const char* index = NULL;
       char* tmp_index = NULL;
       const char* http_prefix = NULL;
       long max_tsfiles = 0;
       double prev_segment_time = 0;
       double segment_duration = 0;

       AVInputFormat* ifmt = NULL;
       AVOutputFormat* ofmt = NULL;
       AVFormatContext* ic = NULL;
       AVFormatContext* oc = NULL;
       AVStream* video_st = NULL;
       AVStream* audio_st = NULL;
       AVCodec* codec = NULL;
       AVDictionary* pAVDictionary = NULL;

       av_register_all();
       av_log_set_level(AV_LOG_DEBUG);

       char szError[256] = {0};
       int nRet = avformat_open_input(&ic, input, ifmt, &pAVDictionary);
       if (nRet != 0) {
           av_strerror(nRet, szError, 256);
           printf(szError);
           printf("\n");
           printf("Call avformat_open_input function failed!\n");
           return 0;
       }

       if (avformat_find_stream_info(ic, NULL) < 0) {
           printf("Call av_find_stream_info function failed!\n");
           return 0;
       }

       ofmt = av_guess_format(NULL, argv[2], NULL);
       if (!ofmt) {
           printf("Call av_guess_format function failed!\n");
           return 0;
       }

       oc = avformat_alloc_context();
       if (!oc) {
           printf("Call av_guess_format function failed!\n");
           return 0;
       }
       oc->oformat = ofmt;

       int video_index = -1, audio_index = -1;
       unsigned int i;
       for (i = 0; i < ic->nb_streams && (video_index < 0 || audio_index < 0); i++) {
           switch (ic->streams[i]->codec->codec_type) {
           case AVMEDIA_TYPE_VIDEO:
               video_index = i;
               ic->streams[i]->discard = AVDISCARD_NONE;
               video_st = add_output_stream(oc, ic->streams[i]);
               if (video_st->codec->codec_id == CODEC_ID_H264) {
                   video_st->codec->opaque = av_bitstream_filter_init("h264_mp4toannexb");
               }
               break;
           case AVMEDIA_TYPE_AUDIO:
               audio_index = i;
               ic->streams[i]->discard = AVDISCARD_NONE;
               audio_st = add_output_stream(oc, ic->streams[i]);
                if (audio_st->codec->codec_id == CODEC_ID_AAC && !audio_st->codec->extradata_size) {
                   audio_st->codec->opaque = av_bitstream_filter_init("aac_adtstoasc");
               }
               break;
           default:
               ic->streams[i]->discard = AVDISCARD_ALL;
               break;
           }
       }
       codec = avcodec_find_decoder(video_st->codec->codec_id);
       if (codec == NULL) {
           printf("Call avcodec_find_decoder function failed!\n");
           return 0;
       }

       if (avcodec_open2(video_st->codec, codec, NULL) < 0) {
           printf("Call avcodec_open function failed !\n");
           return 0;
       }

       if (avio_open(&oc->pb, argv[2], AVIO_FLAG_WRITE) < 0) {
           return 0;
       }

       if (avformat_write_header(oc, &pAVDictionary)) {
           printf("Call avformat_write_header function failed.\n");
           return 0;
       }

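       // Read packets from the input, pass them through the bitstream filters where needed,
       // and write them interleaved into the FLV output.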
       int decode_done = 0;
       do {
           AVPacket packet;
           decode_done = av_read_frame(ic, &packet);
           if (decode_done < 0) {
               break;
           }

           if (packet.stream_index == video_index && (packet.flags & AV_PKT_FLAG_KEY) && video_st->codec->opaque != NULL) {
               AVPacket pkt = packet;
               int a = av_bitstream_filter_filter(video_st->codec->opaque, video_st->codec, NULL, &pkt.data, &pkt.size,
                       packet.data, packet.size, packet.flags & AV_PKT_FLAG_KEY);
               if (a == 0) {
                   memmove(packet.data, pkt.data, pkt.size);
                   packet.size = pkt.size;
               } else if (a > 0) {
                   packet = pkt;
               }
           }
           else if (packet.stream_index == audio_index && audio_st->codec->opaque != NULL) {
               AVPacket pkt = packet;
               int a = av_bitstream_filter_filter(audio_st->codec->opaque, audio_st->codec, NULL, &pkt.data, &pkt.size,
                       packet.data, packet.size, packet.flags & AV_PKT_FLAG_KEY);
               if (a == 0) {
                   memmove(packet.data, pkt.data, pkt.size);
                   packet.size = pkt.size;
               } else if (a > 0) {
                   packet = pkt;
               }
           }
           nRet = av_interleaved_write_frame(oc, &packet);
           if (nRet < 0) {
               printf("Call av_interleaved_write_frame function failed\n");
           } else if (nRet > 0) {
               printf("End of stream requested\n");
               av_free_packet(&packet);
               break;
           }
           av_free_packet(&packet);
       } while(!decode_done);

       av_write_trailer(oc);

       av_bitstream_filter_close(video_st->codec->opaque);  
       av_bitstream_filter_close(audio_st->codec->opaque);  
       avcodec_close(video_st->codec);
       unsigned int k;
       for(k = 0; k < oc->nb_streams; k++) {
           av_freep(&oc->streams[k]->codec);
           av_freep(&oc->streams[k]);
       }
       av_free(oc);
       getchar();
       return 0;
    }
  • Qt Video Recorder

    11 May 2014, by Davlog

    I am trying to create a video recorder with Qt. What I have done so far is take a screenshot of a rectangle on the screen and save it. At the end I use ffmpeg to turn the images into a video file.

    I connected a timer's timeout() signal to my custom slot, which takes the snapshot and saves it to my tmp folder. The timer has an interval of 1000 / 30, which should fire 30 times per second. But 1000 / 30 is a little more than 33 milliseconds, so I cannot really get 30 fps; it ends up slightly off.

    I recorded a YouTube video with my recorder and everything was smooth, but a little faster or slower depending on the interval.

    So my question basically is: how do I get exactly 30 / 40 / 50 / ... fps?
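
    A minimal pacing sketch, assuming the snapshot code itself is fast enough: rather than relying on the timer interval alone, schedule every frame against a monotonic clock so that rounding errors never accumulate. The example below uses plain standard C++ (std::chrono) rather than Qt classes, and capture_frame() is a hypothetical stand-in for the slot that grabs and saves one screenshot.

    #include <chrono>
    #include <thread>

    // Hypothetical stand-in for the code that grabs and saves one screenshot.
    void capture_frame(int index);

    void record(int fps, int total_frames) {
        using clock = std::chrono::steady_clock;
        const auto frame_duration = std::chrono::nanoseconds(1000000000LL / fps);
        const auto start = clock::now();

        for (int i = 0; i < total_frames; ++i) {
            capture_frame(i);
            // Sleep until the ideal timestamp of frame i+1, measured from the start,
            // so per-frame rounding and jitter do not add up over the recording.
            std::this_thread::sleep_until(start + (i + 1) * frame_duration);
        }
    }

    The same idea carries over to a QTimer-driven design: run the timer a little faster than needed and only save a frame once QElapsedTimer::elapsed() has passed that frame's ideal timestamp.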

  • Correct command to transmit audio to an IP camera using ffmpeg?

    4 November 2016, by the_naive

    So I found some hints in this discussion about the correct command to transmit audio to an Axis IP camera using ffmpeg on Windows, but I still have not managed to transmit audio to the camera successfully.

    The command I am using is the following:

    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -multiple_requests 1 -reconnect_at_eof 1 -reconnect_streamed 1 -content_type "audio/basic" -report

    The output I get from this command is the following:

    ffmpeg started on 2016-11-04 at 17:32:13
    Report written to "ffmpeg-20161104-173213.log"
    Command line:
    ffmpeg -v debug -y -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -b:a 128k -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi -content_type audio/basic -multiple_requests 1 -reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 -report
    ffmpeg version N-82225-gb4e9252 Copyright (c) 2000-2016 the FFmpeg developers
     built with gcc 5.4.0 (GCC)
     configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
     libavutil      55. 35.100 / 55. 35.100
     libavcodec     57. 66.101 / 57. 66.101
     libavformat    57. 57.100 / 57. 57.100
     libavdevice    57.  2.100 / 57.  2.100
     libavfilter     6. 66.100 /  6. 66.100
     libswscale      4.  3.100 /  4.  3.100
     libswresample   2.  4.100 /  2.  4.100
     libpostproc    54.  2.100 / 54.  2.100
    Splitting the commandline.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option '-re' ... matched as option 're' (read input at native frame rate) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'dshow'.
    Reading option '-i' ... matched as input file with argument 'audio=Microphone (2- High Definition Audio Device)'.
    Reading option '-c:a' ... matched as option 'c' (codec name) with argument 'pcm_mulaw'.
    Reading option '-ac' ... matched as option 'ac' (set number of audio channels) with argument '1'.
    Reading option '-ar' ... matched as option 'ar' (set audio sampling rate (in Hz)) with argument '16000'.
    Reading option '-b:a' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '128k'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
    Reading option 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi' ... matched as output file.
    Reading option '-content_type' ... matched as AVOption 'content_type' with argument 'audio/basic'.
    Reading option '-multiple_requests' ... matched as AVOption 'multiple_requests' with argument '1'.
    Reading option '-reconnect' ... matched as AVOption 'reconnect' with argument '1'.
    Reading option '-reconnect_at_eof' ... matched as AVOption 'reconnect_at_eof' with argument '1'.
    Reading option '-reconnect_streamed' ... matched as AVOption 'reconnect_streamed' with argument '1'.
    Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
    Trailing options were found on the commandline.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option v (set logging level) with argument debug.
    Applying option y (overwrite output files) with argument 1.
    Applying option report (generate a report) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input file audio=Microphone (2- High Definition Audio Device).
    Applying option re (read input at native frame rate) with argument 1.
    Applying option f (force format) with argument dshow.
    Successfully parsed a group of options.
    Opening an input file: audio=Microphone (2- High Definition Audio Device).
    [dshow @ 00000000000279e0] Selecting pin Capture on audio only
    dshow passing through packet of type audio size    88200 timestamp 310221040000 orig timestamp 310221040000 graph timestamp 310226130000 diff 5090000 Microphone (2- High Definition Audio Device)
    [dshow @ 00000000000279e0] All info found
    Guessed Channel Layout for Input Stream #0.0 : stereo
    Input #0, dshow, from 'audio=Microphone (2- High Definition Audio Device)':
     Duration: N/A, start: 31022.104000, bitrate: 1411 kb/s
       Stream #0:0, 1, 1/10000000: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
    Successfully opened the file.
    Parsing a group of options: output file http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    Applying option c:a (codec name) with argument pcm_mulaw.
    Applying option ac (set number of audio channels) with argument 1.
    Applying option ar (set audio sampling rate (in Hz)) with argument 16000.
    Applying option b:a (video bitrate (please use -b:v)) with argument 128k.
    Applying option f (force format) with argument flv.
    Successfully parsed a group of options.
    Opening an output file: http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi.
    [http @ 0000000001c94040] Setting default whitelist 'http,https,tls,rtp,tcp,udp,crypto,httpproxy'
    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1

    Transfer-Encoding: chunked

    User-Agent: Lavf/57.57.100

    Accept: */*

    Expect: 100-continue

    Connection: close

    Host: 10.10.210.2

    Icy-MetaData: 1




    [http @ 0000000001c94040] request: POST /axis-cgi/audio/transmit.cgi HTTP/1.1

    Transfer-Encoding: chunked

    User-Agent: Lavf/57.57.100

    Accept: */*

    Connection: close

    Host: 10.10.210.2

    Icy-MetaData: 1

    Authorization: Digest username="operator", realm="AXIS_ACCC8E027F47", nonce="0EcsO3xABQA=ab5efc4740a6c625ecf6a6729d0d67d2b62b615a", uri="/axis-cgi/audio/transmit.cgi", response="4bd3a627b20d6bcaba9e2f595ef6cd2a", algorithm="MD5", qop="auth", cnonce="6a579dd6664b57eb", nc=00000001




    Successfully opened the file.
    detected 8 logical cores
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'time_base' to value '1/44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_rate' to value '44100'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'sample_fmt' to value 's16'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] Setting 'channel_layout' to value '0x3'
    [graph 0 input from stream 0:0 @ 0000000001c9f6e0] tb:1/44100 samplefmt:s16 samplerate:44100 chlayout:0x3
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_fmts' to value 's16'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'sample_rates' to value '16000'
    [audio format for output stream 0:0 @ 0000000001c9fa20] Setting 'channel_layouts' to value '0x4'
    [audio format for output stream 0:0 @ 0000000001c9fa20] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:0'
    [AVFilterGraph @ 000000000002ab20] query_formats: 4 queried, 6 merged, 3 already done, 0 delayed
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Using s16p internally between filters
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] Matrix coefficients:
    [auto-inserted resampler 0 @ 0000000001ca4060] [SWR @ 0000000001ca4a80] FC: FL:0.500000 FR:0.500000
    [auto-inserted resampler 0 @ 0000000001ca4060] ch:2 chl:stereo fmt:s16 r:44100Hz -> ch:1 chl:mono fmt:s16 r:16000Hz
    Output #0, flv, to 'http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi':
     Metadata:
       encoder         : Lavf57.57.100
       Stream #0:0, 0, 1/1000: Audio: pcm_mulaw ([8][0][0][0] / 0x0008), 16000 Hz, mono, s16, 128 kb/s
       Metadata:
         encoder         : Lavc57.66.101 pcm_mulaw
    Stream mapping:
     Stream #0:0 -> #0:0 (pcm_s16le (native) -> pcm_mulaw (native))
    Press [q] to stop, [?] for help
    cur_dts is invalid (this is harmless if it occurs once at the start per stream)
    av_interleaved_write_frame(): Unknown error
    No more output streams to write to, finishing.
    Error writing trailer of http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi: Error number -10053 occurredsize=       8kB time=00:00:00.49 bitrate= 131.2kbits/s speed=79.6x    
    video:0kB audio:8kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 2.492485%
    Input file #0 (audio=Microphone (2- High Definition Audio Device)):
     Input stream #0:0 (audio): 1 packets read (88200 bytes); 1 frames decoded (22050 samples);
     Total: 1 packets (88200 bytes) demuxed
    Output file #0 (http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi):
     Output stream #0:0 (audio): 1 frames encoded (7984 samples); 1 packets muxed (7984 bytes);
     Total: 1 packets (7984 bytes) muxed
    1 frames successfully decoded, 0 decoding errors
    [AVIOContext @ 0000000001c9e4c0] Statistics: 0 seeks, 2 writeouts
    dshow passing through packet of type audio size    12152 timestamp 310226130000 orig timestamp 310226130000 graph timestamp 310226820000 diff 690000 Microphone (2- High Definition Audio Device)
    Conversion failed!

    For some reason, despite setting multiple_requests, reconnect_at_eof and reconnect_streamed all to true, the connection gets closed.

    Could you please tell me what I am doing wrong?
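
    One hint, offered as a guess rather than a verified fix: the debug log above prints "Trailing options were found on the commandline.", which means the options placed after the output URL (-multiple_requests, -reconnect_at_eof, -reconnect_streamed, -content_type) appear not to be applied at all. Output and protocol options have to come before the output they belong to, so a reordered command would look something like this (same device name and URL as above):

    ffmpeg -v debug -y -report -re -f dshow -i "audio=Microphone (2- High Definition Audio Device)" -c:a pcm_mulaw -ac 1 -ar 16000 -multiple_requests 1 -reconnect 1 -reconnect_at_eof 1 -reconnect_streamed 1 -content_type "audio/basic" -f flv http://operator:operator@10.10.210.2/axis-cgi/audio/transmit.cgi

    Whether the camera then keeps the connection open is a separate matter; error -10053 is the Windows "software caused connection abort" code, which suggests the camera side may still be closing the stream.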