
Other articles (77)

  • Customize by adding your logo, banner, or background image

    5 September 2013, by

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or higher. If needed, contact your MediaSPIP administrator to find out.

On other sites (7223)

  • FFMpeg: write h264 stream to mp4 container without changes

    3 March 2017, by Bumblebee

    Good day.

    For brevity, the code omits error handling and memory management.

    I want to capture an H.264 video stream and pack it into an MP4 container without changes. Since I don't control the source of the stream, I cannot make assumptions about its structure; therefore I must probe the input stream.

       AVProbeData probeData;
       probeData.buf_size = s->BodySize();
       probeData.buf = s->GetBody();
       probeData.filename = "";

       AVInputFormat* inFormat = av_probe_input_format(&probeData, 1);  

    This code correctly identifies the H.264 stream.

    Next, I create the input format context,

       unsigned char* avio_input_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));

       AVIOContext* avio_input_ctx = avio_alloc_context(avio_input_buffer, AVIO_BUFFER_SIZE,
           0, this, &read_packet, NULL, NULL);

       AVFormatContext* ifmt_ctx = avformat_alloc_context();
       ifmt_ctx->pb = avio_input_ctx;

       int ret = avformat_open_input(&ifmt_ctx, NULL, inFormat, NULL);

    set image size,

       ifmt_ctx->streams[0]->codec->width = ifmt_ctx->streams[0]->codec->coded_width = width;
       ifmt_ctx->streams[0]->codec->height = ifmt_ctx->streams[0]->codec->coded_height = height;

    create output format context,

       unsigned char* avio_output_buffer = reinterpret_cast<unsigned char*>(av_malloc(AVIO_BUFFER_SIZE));

       AVIOContext* avio_output_ctx = avio_alloc_context(avio_output_buffer, AVIO_BUFFER_SIZE,
           1, this, NULL, &write_packet, NULL);

       AVFormatContext* ofmt_ctx = nullptr;
       avformat_alloc_output_context2(&ofmt_ctx, NULL, "mp4", NULL);
       ofmt_ctx->pb = avio_output_ctx;

       AVDictionary* dict = nullptr;
       av_dict_set(&dict, "movflags", "faststart", 0);
       av_dict_set(&dict, "movflags", "frag_keyframe+empty_moov", 0);

       AVStream* outVideoStream = avformat_new_stream(ofmt_ctx, nullptr);

       avcodec_copy_context(outVideoStream->codec, ifmt_ctx->streams[0]->codec);

       ret = avformat_write_header(ofmt_ctx, &dict);

    Initialization is done. From here on, packets are shifted from the H.264 stream into the MP4 container. I don't calculate pts and dts, because the source packets have AV_NOPTS_VALUE in them.

       AVPacket pkt;
       while (...)
       {
           ret = av_read_frame(ifmt_ctx, &pkt);
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           av_free_packet(&pkt);
       }

    Then I write the trailer and free the allocated memory. That is all. The code works and I get a playable MP4 file.

    Now the problem: the stream characteristics of the resulting file are not completely consistent with those of the source stream. In particular, the fps and bitrate are higher than they should be.

    As a sample, below is the ffplay.exe output for the source stream:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/source.mp4':
    Metadata:
        major_brand     : isom
        minor_version   : 1
        compatible_brands: isom
        creation_time   : 2014-04-14T13:03:54.000000Z
    Duration: 00:00:58.08, start: 0.000000, bitrate: 12130 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661),yuv420p, 1920x1080, 12129 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc (default)
    Metadata:
        handler_name    : VideoHandler

    and for the resulting stream (which contains part of the source stream):

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'd:/movies/target.mp4':
    Metadata:
        major_brand     : isom
        minor_version   : 512
        compatible_brands: isomiso2avc1iso6mp41
        encoder         : Lavf57.56.101
    Duration: 00:00:11.64, start: 0.000000, bitrate: 18686 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 1920x1080, 18683 kb/s, 38.57 fps, 40 tbr, 90k tbn, 50 tbc (default)
    Metadata:
        handler_name    : VideoHandler

    So the question is: what did I miss when copying the stream? I will be grateful for any help.

    Best regards
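    A hedged aside, not part of the original question: when every packet carries AV_NOPTS_VALUE, the MP4 muxer has to guess timestamps, which typically inflates the reported fps and bitrate. One possible fix is to synthesize pts/dts from a known, constant frame rate before each write. The sketch below mirrors what av_rescale_q(frame_index, {1, fps}, {1, 90000}) would compute, using plain integers so it runs standalone; the 25 fps rate and 90 kHz timebase are assumptions taken from the ffplay output above.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Sketch: convert a frame index into a pts in the MP4 muxer's 90 kHz
    // timebase, assuming a constant, known frame rate. Mirrors
    // av_rescale_q(frame_index, (AVRational){1, fps}, (AVRational){1, 90000}).
    int64_t pts_for_frame(int64_t frame_index, int64_t fps,
                          int64_t timebase_den = 90000) {
        // Each frame lasts timebase_den / fps ticks of the output timebase.
        return frame_index * timebase_den / fps;
    }

    int main() {
        // At 25 fps, frames are 3600 ticks apart in a 90 kHz timebase.
        assert(pts_for_frame(0, 25) == 0);
        assert(pts_for_frame(1, 25) == 3600);
        assert(pts_for_frame(25, 25) == 90000);  // one second of video
        return 0;
    }
    ```

    In the copy loop this would amount to setting pkt.pts = pkt.dts = pts_for_frame(n++, fps) before av_interleaved_write_frame, assuming the capture rate really is constant.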

  • Graph-based video processing for .NET

    23 October 2016, by Borv

    Does anyone know a good object-oriented library (preferably high-level, like C# or Java) for working with video and audio streams?

    I wrote an app which fiddles with video and audio streams, feeds and such. The original task was simple:

    • grab an RTSP feed
    • display original feed(s) on the display
    • convert it to a series of h264 ts files
    • extract audio into separate MP3 files
    • upload videos and audio to the web site (preferably in real time; a delay of a few minutes is acceptable)

    As you may have already guessed, it is about recording events (e.g. lectures) and publishing them on the web.

    To pull this off I needed some graph-based non-linear editing for media. Two weeks in, I have tried ffmpeg, vlc and WMF. The only library I got to work is ffmpeg, and that comes with a lot of "howevers". WMF required a lot of coding (so I abandoned that path); vlc looked great on paper, but I stumbled across some bugs with input splitting that I could not get around (e.g. the transcode:es combination flat out refused to work).

    So, the question: what are good non-linear editing libraries besides ffmpeg, vlc and wmf/directshow that allow building video-processing graphs with sources, sinks and filters? Or perhaps good bindings over ffmpeg and vlc that allow building such graphs?
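    For illustration only (this is not any real library's API), the kind of graph the question asks about can be sketched in a few lines of C++: hypothetical source, filter, and sink nodes wired into a pipeline that pushes frames through each stage in order.

    ```cpp
    #include <cassert>
    #include <functional>
    #include <vector>

    // Hypothetical sketch of a graph-based media pipeline: a source produces
    // frames, filters transform them, a sink consumes them. Frames are plain
    // ints here; a real library would pass buffers with timestamps.
    struct Pipeline {
        std::function<bool(int&)> source;             // fills a frame, false at EOF
        std::vector<std::function<int(int)>> filters; // transform stages, in order
        std::function<void(int)> sink;                // consumes the result

        int run() {                                   // returns frames processed
            int frame = 0, count = 0;
            while (source(frame)) {
                int f = frame;
                for (auto& filt : filters) f = filt(f);
                sink(f);
                ++count;
            }
            return count;
        }
    };

    int main() {
        int next = 0, received = 0;
        Pipeline p;
        p.source = [&](int& f) { f = next++; return next <= 5; };  // 5 "frames"
        p.filters.push_back([](int f) { return f * 2; });          // "transcode"
        p.sink = [&](int f) { received += f; };                    // "upload"
        assert(p.run() == 5);
        assert(received == (0 + 1 + 2 + 3 + 4) * 2);  // 20
        return 0;
    }
    ```

    Real graph frameworks (DirectShow, GStreamer, libavfilter) follow this same shape, with negotiation of formats between stages added on top.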

  • Wave bytes to buffer

    24 August 2016, by Mohammad Abu Musa

    I am encoding WAV input from the microphone, which arrives in four-byte frames, to Ogg format. I think I have a problem shifting the bytes into the correct format; here is the code I am using.

    To explain more: I get the audio frames from Google Chrome, where I get data as const8, channels, and samples; the data field is always in 4-byte format.

    I copy the data into a vector of type int16_t, then loop to uninterleave the samples, which I think I am doing wrong. My question is: how can I make sure the data is formatted correctly for the Ogg encoder to handle it?

    void EncoderInstance::OnGetBuffer(int32_t result, pp::AudioBuffer buffer) {
       if (result != PP_OK)
         return;

       assert(buffer.GetSampleSize() == PP_AUDIOBUFFER_SAMPLESIZE_16_BITS);
       const char* data = static_cast<const char*>(buffer.GetDataBuffer());
       uint32_t channels = buffer.GetNumberOfChannels();
       uint32_t samples = buffer.GetNumberOfSamples() / channels;

       if (channel_count_ != channels || sample_count_ != samples) {
         channel_count_ = channels;
         sample_count_ = samples;

         samples_.resize(sample_count_ * channel_count_);
         // Try (+ 5) to ensure that we pick up a new set of samples between each
         // timer-generated repaint.
         timer_interval_ = (sample_count_ * 1000) / buffer.GetSampleRate() + 5;
         // Start the timer for the first buffer.
         if (first_buffer_) {
           first_buffer_ = false;
           ScheduleNextTimer();
         }
       }

       if (is_audio_recording && is_audio_header_written_)
       {
           memcpy(samples_.data(), data,
               sample_count_ * channel_count_ * sizeof(int16_t));

           float** buffer = vorbis_analysis_buffer(&vd, samples);

           /* uninterleave samples */
           for (i = 0; i < samples / 4; i++)
           {
               buffer[0][i] = ((samples_.at(i*4+1) << 8) |
                         (0x00ff & (int16_t)samples_.at(i*4))) / 32768.f;
               buffer[1][i] = ((samples_.at(i*4+3) << 8) |
                         (0x00ff & (int16_t)samples_.at(i*4+2))) / 32768.f;
           }

           vorbis_analysis_wrote(&vd, i);

           while (vorbis_analysis_blockout(&vd, &vb) == 1) {

             /* analysis, assume we want to use bitrate management */
             vorbis_analysis(&vb, NULL);
             vorbis_bitrate_addblock(&vb);

             while (vorbis_bitrate_flushpacket(&vd, &op)) {

               /* weld the packet into the bitstream */
               ogg_stream_packetin(&os, &op);

               /* write out pages (if any) */
               while (!eos) {
                 int result = ogg_stream_pageout(&os, &og);
                 if (result == 0) break;
                 glb_app_thread.message_loop().PostWork(callback_factory_.NewCallback(&EncoderInstance::writeAudioHeader));
                 if (ogg_page_eos(&og)) eos = 1;
               }
             }
           }
       }

       audio_track_.RecycleBuffer(buffer);
       audio_track_.GetBuffer(callback_factory_.NewCallbackWithOutput(
           &EncoderInstance::OnGetBuffer));
    }
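    A hedged observation, not part of the original question: since the buffer already arrives as 16-bit PCM (the code asserts PP_AUDIOBUFFER_SAMPLESIZE_16_BITS) and is copied into an int16_t vector, the byte-reassembly in the loop above combines values that are already whole samples. A standalone sketch of uninterleaving interleaved 16-bit stereo samples into per-channel floats, with no byte shifting (plain C++, no Vorbis dependency; the function name is illustrative):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Sketch: the input already holds int16_t samples, so no byte reassembly
    // is needed -- just split the interleaved L R L R ... stream into
    // per-channel floats in [-1, 1).
    void uninterleave(const std::vector<int16_t>& interleaved,
                      std::vector<float>& left, std::vector<float>& right) {
        const size_t frames = interleaved.size() / 2;  // stereo
        left.resize(frames);
        right.resize(frames);
        for (size_t i = 0; i < frames; ++i) {
            left[i]  = interleaved[2 * i]     / 32768.f;
            right[i] = interleaved[2 * i + 1] / 32768.f;
        }
    }

    int main() {
        // Two stereo frames: L=16384 (0.5), R=-32768 (-1.0), L=0, R=3276.
        std::vector<int16_t> pcm = {16384, -32768, 0, 3276};
        std::vector<float> l, r;
        uninterleave(pcm, l, r);
        assert(l.size() == 2 && r.size() == 2);
        assert(l[0] == 0.5f);
        assert(r[0] == -1.0f);
        assert(l[1] == 0.0f);
        return 0;
    }
    ```

    With this shape, the loop feeding vorbis_analysis_buffer would copy left[i] and right[i] into buffer[0][i] and buffer[1][i], and pass the frame count to vorbis_analysis_wrote.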