Advanced search

Media (0)


No media matching your criteria is available on this site.

Other articles (79)

  • Emballe Médias: putting documents online simply

    29 October 2010, by

    The emballe médias plugin was developed mainly for the MediaSPIP distribution, but it is also used in other related projects, such as géodiversité. Required and compatible plugins
    For this plugin to work, the following other plugins must be installed: CFG, Saisies, SPIP Bonux, Diogène, swfupload, jqueryui.
    Other plugins can be used alongside it to extend its capabilities: Ancres douces, Légendes, photo_infos, spipmotion (...)

  • Automatic installation script for MediaSPIP

    25 April 2011, by

    To work around installation difficulties caused mainly by server-side software dependencies, an "all-in-one" bash installation script was created to simplify this step on a server running a compatible Linux distribution.
    To use it, you need SSH access to your server and a "root" account, which allows the dependencies to be installed. Contact your hosting provider if you do not have these.
    The documentation on using the installation script (...)

  • Installation in farm mode

    4 February 2011, by

    Farm mode lets you host several MediaSPIP-type sites while installing the functional core only once.
    It is the method we use on this very platform.
    Using farm mode requires some familiarity with SPIP's mechanics, unlike the standalone version, which does not really require any specific knowledge since the usual SPIP private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

On other sites (7215)

  • Screenrecorder application output video resolution issues [closed]

    23 June 2022, by JessieK

    I'm using the GitHub code for ScreenRecorder on Linux.
Everything works fine except for the resolution of the output video.
I tried playing with the settings; the quality improved significantly, but there is still no way to change the resolution.
I need the output video to be the same size as the input video.

    


    using namespace std;

    /* initialize the resources*/
    ScreenRecorder::ScreenRecorder()
    {
    
        av_register_all();
        avcodec_register_all();
        avdevice_register_all();
        cout<<"\nall required functions are registered successfully";
    }
    
    /* uninitialize the resources */
    ScreenRecorder::~ScreenRecorder()
    {
    
        avformat_close_input(&pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\nfile closed sucessfully";
        }
        else
        {
            cout<<"\nunable to close the file";
            exit(1);
        }
    
        avformat_free_context(pAVFormatContext);
        if( !pAVFormatContext )
        {
            cout<<"\navformat free successfully";
        }
        else
        {
            cout<<"\nunable to free avformat context";
            exit(1);
        }
    
    }
    
    /* function to capture and store data in frames by allocating required memory and auto deallocating the memory.   */
    int ScreenRecorder::CaptureVideoFrames()
    {
        int flag;
        int frameFinished; /* Decoding a single packet does not always yield a
                              complete frame (it depends on the codec); once a
                              group of packets representing one frame has been
                              decoded, frameFinished signals that a full
                              picture is available. */
    
        int frame_index = 0;
        value = 0;
    
        pAVPacket = (AVPacket *)av_malloc(sizeof(AVPacket));
        av_init_packet(pAVPacket);
    
        pAVFrame = av_frame_alloc();
        if( !pAVFrame )
        {
         cout<<"\nunable to release the avframe resources";
         exit(1);
        }
    
        outFrame = av_frame_alloc();//Allocate an AVFrame and set its fields to default values.
        if( !outFrame )
        {
         cout<<"\nunable to release the avframe resources for outframe";
         exit(1);
        }
    
        int video_outbuf_size;
        int nbytes = av_image_get_buffer_size(outAVCodecContext->pix_fmt,outAVCodecContext->width,outAVCodecContext->height,32);
        uint8_t *video_outbuf = (uint8_t*)av_malloc(nbytes);
        if( video_outbuf == NULL )
        {
            cout<<"\nunable to allocate memory";
            exit(1);
        }
    
        // Setup the data pointers and linesizes based on the specified image parameters and the provided array.
        value = av_image_fill_arrays( outFrame->data, outFrame->linesize, video_outbuf , AV_PIX_FMT_YUV420P, outAVCodecContext->width,outAVCodecContext->height,1 ); // returns : the size in bytes required for src
        if(value < 0)
        {
            cout<<"\nerror in filling image array";
        }
    
        SwsContext* swsCtx_ ;
    
        // Allocate and return swsContext.
        // a pointer to an allocated context, or NULL in case of error
        // Deprecated : Use sws_getCachedContext() instead.
        swsCtx_ = sws_getContext(pAVCodecContext->width,
                                 pAVCodecContext->height,
                                 pAVCodecContext->pix_fmt,
                                 outAVCodecContext->width,
                                 outAVCodecContext->height,
                                 outAVCodecContext->pix_fmt,
                                 SWS_BICUBIC, NULL, NULL, NULL);
    
    
        int ii = 0;
        int no_frames = 100;
        cout<<"\nenter No. of frames to capture : ";
        cin>>no_frames;
    
        AVPacket outPacket;
        int j = 0;
    
        int got_picture;
    
        while( av_read_frame( pAVFormatContext , pAVPacket ) >= 0 )
        {
            if( ii++ == no_frames ) break;
            if(pAVPacket->stream_index == VideoStreamIndx)
            {
                value = avcodec_decode_video2( pAVCodecContext , pAVFrame , &frameFinished , pAVPacket );
                if( value < 0)
                {
                    cout<<"unable to decode video";
                }
    
                if(frameFinished)// Frame successfully decoded :)
                {
                    sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize,0, pAVCodecContext->height, outFrame->data,outFrame->linesize);
                    av_init_packet(&outPacket);
                    outPacket.data = NULL;    // packet data will be allocated by the encoder
                    outPacket.size = 0;
    
                    avcodec_encode_video2(outAVCodecContext , &outPacket ,outFrame , &got_picture);
    
                    if(got_picture)
                    {
                        if(outPacket.pts != AV_NOPTS_VALUE)
                            outPacket.pts = av_rescale_q(outPacket.pts, video_st->codec->time_base, video_st->time_base);
                        if(outPacket.dts != AV_NOPTS_VALUE)
                            outPacket.dts = av_rescale_q(outPacket.dts, video_st->codec->time_base, video_st->time_base);
                    
                        printf("Write frame %3d (size= %2d)\n", j++, outPacket.size/1000);
                        if(av_write_frame(outAVFormatContext , &outPacket) != 0)
                        {
                            cout<<"\nerror in writing video frame";
                        }
    
                    av_packet_unref(&outPacket);
                    } // got_picture
    
                av_packet_unref(&outPacket);
                } // frameFinished
    
            }
        }// End of while-loop
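        /* Editorial note (added): with max_b_frames = 2 the encoder still holds
           delayed packets when this loop ends; they are normally drained by
           calling avcodec_encode_video2() with a NULL frame until got_picture
           is 0, before av_write_trailer() is called. */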


    


    The first of the two parts is above. The original app actually seems to record video at the same size as my application does, but that still doesn't help.

    



    


    Second part of the code

    


    av_free(video_outbuf);

}

/* establish the connection to the camera or the screen through its respective device */
int ScreenRecorder::openCamera()
{

    value = 0;
    options = NULL;
    pAVFormatContext = NULL;

    pAVFormatContext = avformat_alloc_context();//Allocate an AVFormatContext.
/*

X11 video input device.
To enable this input device during configuration you need libxcb installed on your system. It will be automatically detected during configuration.
This device allows one to capture a region of an X11 display. 
refer : https://www.ffmpeg.org/ffmpeg-devices.html#x11grab
*/
    /* current below is for screen recording. to connect with camera use v4l2 as a input parameter for av_find_input_format */ 
    pAVInputFormat = av_find_input_format("x11grab");
    value = avformat_open_input(&pAVFormatContext, ":0.0+10,250", pAVInputFormat, NULL);
    if(value != 0)
    {
       cout<<"\nerror in opening input device";
       exit(1);
    }
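    /* Editorial note (added): ":0.0+10,250" selects display 0, screen 0, with the
       grab region's top-left corner at (10,250). Since no options dictionary is
       passed to avformat_open_input() above, x11grab captures at its default
       "video_size" (vga, 640x480); the "framerate" option set below is also never
       seen by the device, because it is added to the dictionary only after the
       input has already been opened. */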

    /* set frame per second */
    value = av_dict_set( &options,"framerate","30",0 );
    if(value < 0)
    {
      cout<<"\nerror in setting dictionary value";
       exit(1);
    }

    value = av_dict_set( &options, "preset", "medium", 0 );
    if(value < 0)
    {
      cout<<"\nerror in setting preset values";
      exit(1);
    }

    value = avformat_find_stream_info(pAVFormatContext,NULL); // uncommented: the error check below depends on this call
    if(value < 0)
    {
      cout<<"\nunable to find the stream information";
      exit(1);
    }

    VideoStreamIndx = -1;

    /* find the first video stream index (av_find_best_stream() could also do this) */
    for(int i = 0; i < pAVFormatContext->nb_streams; i++ ) // find video stream position/index
    {
      if( pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO )
      {
         VideoStreamIndx = i;
         break;
      }

    } 

    if( VideoStreamIndx == -1)
    {
      cout<<"\nunable to find the video stream index. (-1)";
      exit(1);
    }

    // get the (legacy) codec context of the video stream at VideoStreamIndx
    pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;

    pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
    if( pAVCodec == NULL )
    {
      cout<<"\nunable to find the decoder";
      exit(1);
    }

    value = avcodec_open2(pAVCodecContext , pAVCodec , NULL);//Initialize the AVCodecContext to use the given AVCodec.
    if( value < 0 )
    {
      cout<<"\nunable to open the av codec";
      exit(1);
    }
    return 0;
}

/* initialize the video output file and its properties  */
int ScreenRecorder::init_outputfile()
{
    outAVFormatContext = NULL;
    value = 0;
    output_file = "../media/output.mp4";

    avformat_alloc_output_context2(&outAVFormatContext, NULL, NULL, output_file);
    if (!outAVFormatContext)
    {
        cout<<"\nerror in allocating av format output context";
        exit(1);
    }

/* Returns the output format in the list of registered output formats which best matches the provided parameters, or returns NULL if there is no match. */
    output_format = av_guess_format(NULL, output_file ,NULL);
    if( !output_format )
    {
     cout<<"\nerror in guessing the video format. try with correct format";
     exit(1);
    }

    video_st = avformat_new_stream(outAVFormatContext ,NULL);
    if( !video_st )
    {
        cout<<"\nerror in creating a av format new stream";
        exit(1);
    }

    /* the encoder must be found before the codec context is configured
       (it was originally looked up only after the settings below) */
    outAVCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    if( !outAVCodec )
    {
        cout<<"\nerror in finding the av codecs. try again with correct codec";
        exit(1);
    }

    /* set properties of the video file on the stream's (legacy) codec context */
    outAVCodecContext = video_st->codec;
    outAVCodecContext->codec_id = AV_CODEC_ID_MPEG4; // alternatives: AV_CODEC_ID_H264, AV_CODEC_ID_MPEG1VIDEO
    outAVCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
    outAVCodecContext->pix_fmt  = AV_PIX_FMT_YUV420P;
    outAVCodecContext->bit_rate = 2500000;
    outAVCodecContext->width = 1920;  // output size is hardcoded here
    outAVCodecContext->height = 1080; // output size is hardcoded here
    outAVCodecContext->gop_size = 3;
    outAVCodecContext->max_b_frames = 2;
    outAVCodecContext->time_base.num = 1;
    outAVCodecContext->time_base.den = 30; // 30 fps

    /* "preset" is an x264 private option; the MPEG-4 encoder simply ignores it */
    av_opt_set(outAVCodecContext->priv_data, "preset", "slow", 0);

    /* Some container formats (like MP4) require global headers to be present
       Mark the encoder so that it behaves accordingly. */

    if ( outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER)
    {
        outAVCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    }

    value = avcodec_open2(outAVCodecContext, outAVCodec, NULL);
    if( value < 0)
    {
        cout<<"\nerror in opening the avcodec";
        exit(1);
    }

    /* create empty video file */
    if ( !(outAVFormatContext->flags & AVFMT_NOFILE) )
    {
     if( avio_open2(&outAVFormatContext->pb , output_file , AVIO_FLAG_WRITE ,NULL, NULL) < 0 )
     {
      cout<<"\nerror in creating the video file";
      exit(1);
     }
    }

    if(!outAVFormatContext->nb_streams)
    {
        cout<<"\noutput file dose not contain any stream";
        exit(1);
    }

    /* important: MP4 and some other advanced containers require header information */
    value = avformat_write_header(outAVFormatContext , &options);
    if(value < 0)
    {
        cout<<"\nerror in writing the header context";
        exit(1);
    }


    cout<<"\n\nOutput file information :\n\n";
    av_dump_format(outAVFormatContext , 0 ,output_file ,1);


    


    GitHub link: https://github.com/abdullahfarwees/screen-recorder-ffmpeg-cpp
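    Editorial note (not part of the original question): init_outputfile() above hardcodes the encoder to 1920x1080, and sws_scale() then stretches every captured frame to that fixed size, while x11grab itself captures at its own region size. A minimal sketch, reusing the poster's member names, of how the output could instead track the input once openCamera() has filled pAVCodecContext:

    /* Sketch only (hypothetical helper, not in the original code): call it after
       openCamera() and before avcodec_open2() on the output context, instead of
       hardcoding width/height in init_outputfile(). */
    void ScreenRecorder::matchOutputToInput()
    {
        outAVCodecContext->width  = pAVCodecContext->width  & ~1; // capture width, kept even for YUV420P
        outAVCodecContext->height = pAVCodecContext->height & ~1; // capture height, kept even for YUV420P
    }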

    


  • Converting a voice recording into an mp3

    21 July 2023, by Raphael M

    For a vue.js messaging project, I'm using the wavesurfer.js library to record voice messages. However, Google Chrome gives me an audio/webm blob while Safari gives me an audio/mp4 blob.

    


    I'm trying to find a solution to transcode the blob into audio/mp3. I've tried several methods, including ffmpeg. However, ffmpeg gives me an error when compiling "npm run dev" : "Can't resolve '/node_modules/@ffmpeg/core/dist/ffmpeg-core.js'".

    


    "@ffmpeg/core": "^0.11.0",
"@ffmpeg/ffmpeg": "^0.11.6"


    


    I tried to downgrade ffmpeg

    


    "@ffmpeg/core": "^0.9.0",
"@ffmpeg/ffmpeg": "^0.9.8"


    


    I no longer get the error message when compiling, but when I want to convert my audio stream, the console displays a SharedArrayBuffer problem: "Uncaught (in promise) ReferenceError: SharedArrayBuffer is not defined".

    


    Here's my complete code below.
Is there a reliable way of transcoding the audio stream into mp3?

    


    Can you give me an example?

    


    Thanks

    


    <template>
      <div class="left-panel">
        <header class="radial-blue">
          <div class="container">
            <h1 class="mb-30">Posez votre première question à nos thérapeutes</h1>
            <p><b>Attention</b>, vous disposez seulement de 2 messages. Veillez à les utiliser de manière judicieuse !</p>
            <div class="available-messages">
              <div class="item disabled">
                <span>Message 1</span>
              </div>
              <div class="item">
                <span>Message 2</span>
              </div>
            </div>
          </div>
        </header>
      </div>
      <div class="right-panel">
        <div class="messagerie bg-light">
          <messaging ref="messagingComponent"></messaging>
          <footer>
            <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-start.svg" style='max-width: 300px; max-height: 300px' /></button>
            <div class="loading-animation">
              <img src="http://stackoverflow.com/assets/backoffice/images/record-loading.svg" style='max-width: 300px; max-height: 300px' />
            </div>
            <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-stop.svg" style='max-width: 300px; max-height: 300px' /></button>
            <div class="textarea gradient text-dark">
              <textarea placeholder="Posez votre question"></textarea>
            </div>
            <div class="loading-text">Chargement de votre microphone en cours...</div>
            <div class="loading-text">Envoi de votre message en cours...</div>
            <div ref="visualizer"></div>
            <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/send.svg" style='max-width: 300px; max-height: 300px' /></button>
            <div>
              {{ formatTimer() }}
            </div>
          </footer>
        </div>
      </div>
    </template>

    <script>
    import Messaging from "./Messaging.vue";
    import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

    export default {
      data() {
        return {
          isMicrophoneLoading: false,
          isSubmitLoading: false,
          isMobile: false,
          isMessagerie: false,
          isRecording: false,
          audioUrl: '',
          messageText: '',
          message: null,
          wavesurfer: null,
          access: (this.isMobile ? 'denied' : 'granted'),
          maxMinutes: 5,
          orangeTimer: 3,
          redTimer: 4,
          timer: 0,
          timerInterval: null,
          ffmpeg: null,
        };
      },
      components: {
        Messaging,
      },
      mounted() {
        this.checkScreenSize();
        window.addEventListener('resize', this.checkScreenSize);

        if (!this.isMobile) {
          this.$moment.locale('fr');
          window.addEventListener('beforeunload', (event) => {
            if (this.isMessagerie) {
              event.preventDefault();
              event.returnValue = '';
            }
          });

          this.initializeWaveSurfer();
        }
      },
      beforeUnmount() {
        window.removeEventListener('resize', this.checkScreenSize);
      },
      methods: {
        checkScreenSize() {
          this.isMobile = window.innerWidth < 1200;

          const windowHeight = window.innerHeight;
          const navbarHeight = this.$navbarHeight;
          let padding = parseInt(navbarHeight + 181);

          const messageListHeight = windowHeight - padding;
          this.$refs.messagingComponent.$refs.messageList.style.height = messageListHeight + 'px';
        },
        showMessagerie() {
          this.isMessagerie = true;
          this.$refs.messagingComponent.scrollToBottom();
        },
        checkMicrophoneAccess() {
          if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
            return navigator.mediaDevices.getUserMedia({ audio: true })
                .then(function (stream) {
                  stream.getTracks().forEach(function (track) {
                    track.stop();
                  });
                  return true;
                })
                .catch(function (error) {
                  console.error('Erreur lors de la demande d\'accès au microphone:', error);
                  return false;
                });
          } else {
            console.error('getUserMedia n\'est pas supporté par votre navigateur.');
            return false;
          }
        },
        initializeWaveSurfer() {
          this.wavesurfer = this.$wavesurfer.create({
            container: '#visualizer',
            barWidth: 3,
            barHeight: 1.5,
            height: 46,
            responsive: true,
            waveColor: 'rgba(108,115,202,0.3)',
            progressColor: 'rgba(108,115,202,1)',
            cursorColor: 'transparent'
          });

          this.record = this.wavesurfer.registerPlugin(this.$recordPlugin.create());
        },
        startRecording() {
          const _this = this;
          this.isMicrophoneLoading = true;

          setTimeout(() => {
            _this.checkMicrophoneAccess().then(function (accessible) {
              if (accessible) {
                _this.record.startRecording();

                _this.record.once('startRecording', () => {
                  _this.isMicrophoneLoading = false;
                  _this.isRecording = true;
                  _this.updateChildMessage('server', 'Allez-y ! Vous pouvez enregistrer votre message audio maintenant. La durée maximale autorisée pour votre enregistrement est de 5 minutes.', 'text', '', 'Message automatique');
                  _this.startTimer();
                });
              } else {
                _this.isRecording = false;
                _this.isMicrophoneLoading = false;
                _this.$swal.fire({
                  title: 'Microphone non détecté',
                  html: '<p>Le microphone de votre appareil est inaccessible ou l\'accès a été refusé.</p><p>Merci de vérifier les paramètres de votre navigateur afin de vérifier les autorisations de votre microphone.</p>',
                  footer: '<a href="http://stackoverflow.com/contact">Vous avez besoin d\'aide ?</a>',
                });
              }
            });
          }, 100);
        },
        stopRecording() {
          this.stopTimer();
          this.isRecording = false;
          this.isSubmitLoading = true;
          this.record.stopRecording();

          this.record.once('stopRecording', () => {
            const blobUrl = this.record.getRecordedUrl();
            fetch(blobUrl).then(response => response.blob()).then(blob => {
              this.uploadAudio(blob);
            });
          });
        },
        startTimer() {
          this.timerInterval = setInterval(() => {
            this.timer++;
            if (this.timer === this.maxMinutes * 60) {
              this.stopRecording();
            }
          }, 1000);
        },
        stopTimer() {
          clearInterval(this.timerInterval);
          this.timer = 0;
        },
        formatTimer() {
          const minutes = Math.floor(this.timer / 60);
          const seconds = this.timer % 60;
          const formattedMinutes = minutes < 10 ? `0${minutes}` : minutes;
          const formattedSeconds = seconds < 10 ? `0${seconds}` : seconds;
          return `${formattedMinutes}:${formattedSeconds}`;
        },
        async uploadAudio(blob) {
          const format = blob.type === 'audio/webm' ? 'webm' : 'mp4';

          // Convert the blob to MP3
          const mp3Blob = await this.convertToMp3(blob, format);

          const s3 = new this.$AWS.S3({
            accessKeyId: 'xxx',
            secretAccessKey: 'xxx',
            region: 'eu-west-1'
          });

          var currentDate = new Date();
          var filename = currentDate.getDate().toString() + '-' + currentDate.getMonth().toString() + '-' + currentDate.getFullYear().toString() + '--' + currentDate.getHours().toString() + '-' + currentDate.getMinutes().toString() + '.mp4';

          const params = {
            Bucket: 'xxx/audio',
            Key: filename,
            Body: mp3Blob,
            ACL: 'public-read',
            ContentType: 'audio/mp3'
          }

          s3.upload(params, (err, data) => {
            if (err) {
              console.error('Error uploading audio:', err)
            } else {
              const currentDate = this.$moment();
              const timestamp = currentDate.format('dddd DD MMMM YYYY HH:mm');

              this.updateChildMessage('client', '', 'audio', mp3Blob, timestamp);
              this.isSubmitLoading = false;
            }
          });
        },
        async convertToMp3(blob, format) {
          const ffmpeg = createFFmpeg({ log: true });
          await ffmpeg.load();

          const inputPath = 'input.' + format;
          const outputPath = 'output.mp3';

          ffmpeg.FS('writeFile', inputPath, await fetchFile(blob));

          await ffmpeg.run('-i', inputPath, '-acodec', 'libmp3lame', outputPath);

          const mp3Data = ffmpeg.FS('readFile', outputPath);
          const mp3Blob = new Blob([mp3Data.buffer], { type: 'audio/mp3' });

          ffmpeg.FS('unlink', inputPath);
          ffmpeg.FS('unlink', outputPath);

          return mp3Blob;
        },
        sendMessage() {
          this.isSubmitLoading = true;
          if (this.messageText.trim() !== '') {
            const emmet = 'client';
            const text = this.escapeHTML(this.messageText)
                .replace(/\n/g, '<br>');

            const currentDate = this.$moment();
            const timestamp = currentDate.format('dddd DD MMMM YYYY HH:mm');

            this.$nextTick(() => {
              this.messageText = '';

              const textarea = document.getElementById('messageTextarea');
              if (textarea) {
                textarea.scrollTop = 0;
                textarea.scrollLeft = 0;
              }
            });

            this.updateChildMessage(emmet, text, 'text', '', timestamp);
            this.isSubmitLoading = false;
          }
        },
        escapeHTML(text) {
          const map = {
            '&': '&amp;',
            '<': '&lt;',
            '>': '&gt;',
            '"': '&quot;',
            "'": '&#039;',
            "`": '&#x60;',
            "/": '&#x2F;'
          };
          return text.replace(/[&<>"'`/]/g, (match) => map[match]);
        },
        updateChildMessage(emmet, text, type, blob, timestamp) {
          const newMessage = {
            id: this.$refs.messagingComponent.lastMessageId + 1,
            emmet: emmet,
            text: text,
            type: type,
            blob: blob,
            timestamp: timestamp
          };

          this.$refs.messagingComponent.updateMessages(newMessage);
        }
      },
    };
    </script>
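    Editorial note (not part of the original question): with @ffmpeg/ffmpeg 0.11.x, the "Can't resolve '@ffmpeg/core'" build error is usually worked around by passing an explicit corePath URL to createFFmpeg(), and the SharedArrayBuffer error is expected on pages that are not cross-origin isolated: browsers only expose SharedArrayBuffer when the page is served with the two response headers below (the exact server configuration varies).

    Cross-Origin-Opener-Policy: same-origin
    Cross-Origin-Embedder-Policy: require-corp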

  • ffmpeg concatenation after using drawtext filter

    12 August 2016, by Sven Hoskens

    I’m fairly new to ffmpeg, but after a few days of searching on this issue, I’ve completely hit a brick wall. Any help would be appreciated.

    My use case : Our client wants to upload videos for multiple regions. Each video will be the same format, 1920x1080, mp4. For each region, they want to add a different image at the end of the video, for a few seconds. This image contains their logo, some additional info, and a variable code. They will enter this code alongside the uploaded video. The image stays the same, so is already present on the server.
    So basically, I have an input video, a video of an image, and a small code. I need to add this code to the video of the image (in a predefined position), and then I need to add the resulting video to the end of the input video. Once that is complete, I just need to output the video in 1920x1080 and in 1024x576.

    I have tried several things, but the concatenation step always fails with the manipulated videos.

    Attempt 1

    In my first attempt, I used ffmpeg to create a video from an image, and add the text in the designated area.

    ffmpeg -y -f lavfi -i image.png -r 30 -t 10 -pix_fmt yuv420p -map 0:v -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" imageVideo.mp4

    This command creates a .mp4 video of the correct size, with a duration of 10 seconds, and adds the text ’GLNS/TEST/1234b’ in the correct location.

    Next, I use the following command to concatenate the two videos. Both have the same resolution and codec.

    ffmpeg -f concat -safe 0 -i config.txt -vf scale=1920:1080 outputHD.mp4 -vf scale=1024:576 outputSD.mp4

    config.txt contains the following:

    file my_input_file.mp4
    file ImageVideo.mp4

    This concatenation works with regular videos. However, when I use it with ImageVideo.mp4 (the one created by the first command), I get this error log:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f86dc924600] Auto-inserting h264_mp4toannexb bitstream filtereed=0.509x    
    [aac @ 0x7f86dc019e00] Number of bands (31) exceeds limit (5).
    Error while decoding stream #0:1: Invalid data found when processing input
    [aac @ 0x7f86dc019e00] Number of bands (27) exceeds limit (8).
    Error while decoding stream #0:1: Invalid data found when processing input
    [h264 @ 0x7f86dd857200] Error splitting the input into NAL units.
    [h264 @ 0x7f86dd829400] Invalid NAL unit size.
    [h264 @ 0x7f86dd829400] Error splitting the input into NAL units.
    [aac @ 0x7f86dc019e00] Number of bands (10) exceeds limit (1).
    Error while decoding stream #0:1: Invalid data found when processing input
    [h264 @ 0x7f86dd816800] Invalid NAL unit size.
    [h264 @ 0x7f86dd816800] Error splitting the input into NAL units.
    [aac @ 0x7f86dc019e00] Number of bands (24) exceeds limit (1).
    Error while decoding stream #0:1: Invalid data found when processing input

    #this goes on for a few hundred lines

    The resulting output is identical to the input video, but does not contain the desired image video at the end.
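    Editorial note (not part of the original question): the concat demuxer assumes every listed file has the same codecs, parameters, and set of streams. A video rendered from a still image typically has no audio track at all, which would explain the AAC decode errors above. A hedged variant of the first command that bakes a silent stereo track into the image video (anullsrc is a standard lavfi source):

    ffmpeg -loop 1 -i image.png -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -t 10 -r 30 -pix_fmt yuv420p -c:v libx264 -c:a aac -shortest -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" imageVideo.mp4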

    Attempt 2

    Since the above attempt didn't work, I tried concatenating a video of the image that I had our designer make with Adobe After Effects. This video was also saved as an .mp4 with the H264 codec. If I concatenate the input video and this one, I get a correct result. However, as soon as I add the code in the designated area with this command:

    ffmpeg -i new_image_video.mp4 -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" -c:v libx264 imageVideo.mp4

    I get this error:

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7ff94c800000] Auto-inserting h264_mp4toannexb bitstream filter97x    
    [h264 @ 0x7ff94b053800] top block unavailable for requested intra mode -1
    [h264 @ 0x7ff94b053800] error while decoding MB 0 0, bytestream 49526
    [h264 @ 0x7ff94b053e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
    [h264 @ 0x7ff94b053e00] chroma_log2_weight_denom 28 is out of range
    [h264 @ 0x7ff94b053e00] illegal long ref in memory management control operation 2
    [h264 @ 0x7ff94b053e00] cabac_init_idc 32 overflow
    [h264 @ 0x7ff94b053e00] decode_slice_header error
    [h264 @ 0x7ff94b053e00] no frame!
    [h264 @ 0x7ff94b053800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
    [h264 @ 0x7ff94b072a00] reference overflow 22 > 15 or 0 > 15
    [h264 @ 0x7ff94b072a00] decode_slice_header error
    [h264 @ 0x7ff94b072a00] no frame!
    [h264 @ 0x7ff94b01a400] illegal modification_of_pic_nums_idc 20
    [h264 @ 0x7ff94b01a400] decode_slice_header error
    [h264 @ 0x7ff94b01a400] no frame!
    [h264 @ 0x7ff94b01aa00] illegal modification_of_pic_nums_idc 20
    [h264 @ 0x7ff94b01aa00] decode_slice_header error
    [h264 @ 0x7ff94b01aa00] no frame!
    Error while decoding stream #0:0: Invalid data found when processing input
    [h264 @ 0x7ff94b053800] deblocking_filter_idc 8 out of range
    [h264 @ 0x7ff94b053800] decode_slice_header error
    [h264 @ 0x7ff94b053800] no frame!
    Error while decoding stream #0:0: Invalid data found when processing input
    [h264 @ 0x7ff94b053e00] illegal memory management control operation 8
    [h264 @ 0x7ff94b053e00] co located POCs unavailable
    [h264 @ 0x7ff94b053e00] error while decoding MB 2 0, bytestream -35
    [h264 @ 0x7ff94b053e00] concealing 8160 DC, 8160 AC, 8160 MV errors in B frame
    [h264 @ 0x7ff94b072a00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one

    # this goes on for a while...

    [h264 @ 0x7ff94b01a400] concealing 4962 DC, 4962 AC, 4962 MV errors in B frame
    Error while decoding stream #0:0: Invalid data found when processing input
    frame= 2553 fps= 17 q=-1.0 Lsize=   26995kB time=00:01:42.16 bitrate=2164.6kbits/s dup=0 drop=60 speed=0.697x    
    video:25258kB audio:1661kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.285236%
    [libx264 @ 0x7ff94b810400] frame I:35    Avg QP:17.45  size: 55070
    [libx264 @ 0x7ff94b810400] frame P:711   Avg QP:19.73  size: 18712
    [libx264 @ 0x7ff94b810400] frame B:1807  Avg QP:21.53  size:  5884
    [libx264 @ 0x7ff94b810400] consecutive B-frames:  3.4%  5.0%  4.9% 86.6%
    [libx264 @ 0x7ff94b810400] mb I  I16..4: 38.2% 49.3% 12.5%
    [libx264 @ 0x7ff94b810400] mb P  I16..4: 12.4% 14.0%  1.0%  P16..4: 29.6%  4.8%  1.9%  0.0%  0.0%    skip:36.2%
    [libx264 @ 0x7ff94b810400] mb B  I16..4:  1.5%  1.2%  0.1%  B16..8: 27.3%  1.6%  0.1%  direct: 1.8%  skip:66.4%  L0:45.8% L1:51.4% BI: 2.8%
    [libx264 @ 0x7ff94b810400] 8x8 transform intra:49.5% inter:85.4%
    [libx264 @ 0x7ff94b810400] coded y,uvDC,uvAC intra: 21.2% 22.3% 2.5% inter: 4.6% 7.0% 0.0%
    [libx264 @ 0x7ff94b810400] i16 v,h,dc,p: 23% 26% 10% 41%
    [libx264 @ 0x7ff94b810400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 19% 35%  3%  3%  3%  3%  3%  2%
    [libx264 @ 0x7ff94b810400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 20% 16%  5%  7%  6%  5%  5%  4%
    [libx264 @ 0x7ff94b810400] i8c dc,h,v,p: 67% 16% 15%  2%
    [libx264 @ 0x7ff94b810400] Weighted P-Frames: Y:7.3% UV:4.2%
    [libx264 @ 0x7ff94b810400] ref P L0: 66.3%  8.7% 17.9%  7.0%  0.1%
    [libx264 @ 0x7ff94b810400] ref B L0: 88.2% 10.1%  1.7%
    [libx264 @ 0x7ff94b810400] ref B L1: 94.9%  5.1%
    [libx264 @ 0x7ff94b810400] kb/s:2026.12
    [aac @ 0x7ff94b072400] Qavg: 635.626

    The resulting output is identical to the input video, but does not contain the desired image video at the end.

    One thing I have noticed: when I inspect the video files on Mac (Get Info), they always contain these lines under ’More info’:

    Dimensions: 1920 x 1080
    Codecs: H.264, AAC
    Color profile: HD(1-1-1)
    Duration: 01:42
    Audio channels: 2
    Last opened: Today 11:02

    However, the videos that pass through the drawtext filter have this:

    Dimensions: 1920 x 1080
    Codecs: AAC, H.264
    Duration: 00:10
    Audio channels: 2
    Last opened: Today 11:07

    As you can see, there is no color profile entry, and the codecs have switched places. I assume this is related to my issue, but I can’t seem to find a fix for it.
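    Editorial note (not part of the original question): when the inputs don't match exactly, the usual alternative to the concat demuxer is the concat filter, which decodes and re-encodes both clips and therefore tolerates differing encoder settings. A sketch, assuming both inputs carry one video and one audio stream:

    ffmpeg -i my_input_file.mp4 -i imageVideo.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[vc][a];[vc]scale=1920:1080[v]" -map "[v]" -map "[a]" outputHD.mp4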

    PS: The application will run in a PHP environment (Symfony). I noticed the concat command wasn't available in the Symfony bundle for ffmpeg, so I'm using the regular terminal commands, which I'll execute from PHP.

    EDIT
    Attempt 3

    On the advice of a coworker, I tried converting the video to .avi and re-converting it to .mp4, in the hope that this would shed any corrupted or extra info introduced by the drawtext filter. This spits out a completely different error.

    [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f812413da00] Auto-inserting h264_mp4toannexb bitstream filtereed=0.516x    
    [concat @ 0x7f8124009a00] DTS 1569260 < 2551000 out of order
    [h264 @ 0x7f8124846800] left block unavailable for requested intra4x4 mode -1
    [h264 @ 0x7f8124846800] error while decoding MB 0 0, bytestream 47919
    [h264 @ 0x7f8124846800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
    [aac @ 0x7f8125809a00] Queue input is backward in time
    [aac @ 0x7f8125815a00] Queue input is backward in time
    [h264 @ 0x7f8124846e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
    [h264 @ 0x7f8124846e00] chroma_log2_weight_denom 26 is out of range
    [h264 @ 0x7f8124846e00] deblocking_filter_idc 32 out of range
    [h264 @ 0x7f8124846e00] decode_slice_header error
    [h264 @ 0x7f8124846e00] no frame!
    [mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
    [h264 @ 0x7f8124803400] reference overflow 20 > 15 or 0 > 15
    [h264 @ 0x7f8124803400] decode_slice_header error
    [h264 @ 0x7f8124803400] no frame!
    [mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.

    # Again, this continues for quite a while.