
Other articles (72)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (AVI, MP4, OGV, mpg, mov, wmv and others...); or textual content, code and other data (OpenOffice, Microsoft Office (spreadsheets, presentations), web (HTML, CSS), LaTeX, Google Earth) (...)

  • Supporting all media types

    13 April 2011

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth and (...)

On other sites (10729)

  • flv created using ffmpeg library plays too fast

    25 April 2015, by Muhammad Ali

    I am muxing an H264 Annex B stream and an ADTS AAC stream coming from an IP camera into an FLV. I have gone through all the necessary steps (that I knew of), e.g. stripping the ADTS header from the AAC and converting the H264 Annex B stream to AVC.

    I am able to create the FLV file, and it plays, but too fast. The parameters of my output format's video codec are:

    Time base = 1/60000   <-- I don't know why
    Bitrate = 591949 (591 Kbps)
    GOP size = 12
    FPS = 30 fps (that's the rate the encoder sends me data at)

    Parameters for the output format's audio codec are:

    Timebase = 1/44100
    Bitrate = 45382 (45Kbps)
    Sample rate = 48000

    I am using AV_NOPTS_VALUE for both audio and video.
    The resulting file has double the bitrate (2 x (audio bitrate + video bitrate)) and half the duration.
    If I play the resulting file in ffplay, the video plays back fast and ends quickly, but the audio plays at its original pace; even after the video has ended, the audio keeps playing for its full duration.

    If I set pts and dts to an increasing index (separate indices for audio and video), the video plays super fast, the bitrate shoots to an insane value and the video duration gets very short, but the audio plays fine and on time.

    EDIT :

    Duration: 00:00:09.96, start: 0.000000, bitrate: 1230 kb/s
    Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709), 1280x720 [SAR 1:1 DAR 16:9], 591 kb/s, 30.33 fps, 59.94 tbr, 1k tbn, 59.94 tbc
    Stream #0:1: Audio: aac, 48000 Hz, mono, fltp, 45 kb/s

    Why is tbr 59.94? How was that calculated? Maybe that is the problem?

    Code for muxing:

    if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
             {

               if((packet.data[0] == 0x00) && (packet.data[1] == 0x00) && (packet.data[2]==0x00) && (packet.data[3]==0x01))
               {

                 unsigned char tempCurrFrameLength[4];
                 unsigned int nal_unit_length;
                 unsigned char nal_unit_type;
                 unsigned int cursor = 0;
                 int size = packet.header.dataLen;

                  do {

                       av_init_packet(&pkt);
                       int currFrameLength = 0;
                       if((packet.header.frameType == TRANSFER_FRAME_IDR_VIDEO) || (packet.header.frameType == TRANSFER_FRAME_I_VIDEO))
                       {
                         //pkt.flags        |= AV_PKT_FLAG_KEY;
                       }
                       pkt.stream_index  = packet.header.streamId;//0;//ost->st->index;     //stream index 0 for vid : 1 for aud
                       outStreamIndex = outputVideoStreamIndex;
                       /*vDuration += (packet.header.dataPTS - lastvPts);
                       lastvPts = packet.header.dataPTS;
                       pkt.pts = pkt.dts= packet.header.dataPTS;*/
                       pkt.pts = pkt.dts = AV_NOPTS_VALUE;

                       if(framebuff != NULL)
                       {
                         //printf("Framebuff has mem alloc : freeing 1\n\n");
                         free(framebuff);
                         framebuff = NULL;
                         //printf("free successfully \n\n");
                       }


                       nal_unit_length = GetOneNalUnit(&nal_unit_type, packet.data + cursor/*pData+cursor*/, size-cursor);
                       if(nal_unit_length > 0 && nal_unit_type > 0)
                       {

                       }
                       else
                       {
                         printf("Fatal error : nal unit length wrong \n\n");
                         exit(0);
                       }
                       write_header_done = 1;
                       //#define _USE_SPS_PPS    //comment this line to write everything on to the stream. SPS+PPSframeframe
                       #ifdef _USE_SPS_PPS
                       if (nal_unit_type == 0x07 /*NAL_SPS*/)
                       { // write sps


                         printf("Got SPS \n");
                         if (_sps == NULL)
                         {
                           _sps_size = nal_unit_length -4;
                           _sps = new U8[_sps_size];
                           memcpy(_sps, packet.data+cursor+4, _sps_size); //exclude start code 0x00000001
                         }

                       }
                       else if (nal_unit_type == 0x08/*NAL_PPS*/)
                       { // write pps

                         printf("Got PPS \n");
                         if (_pps == NULL)
                         {
                           _pps_size = nal_unit_length -4;
                           _pps = new U8[_pps_size];
                           memcpy(_pps, packet.data+cursor+4, _pps_size); //exclude start code 0x00000001

                           //out_stream->codec->extradata
                           //ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata
                           free(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata);
                           ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata = (uint8_t*)av_mallocz(_sps_size + _pps_size);

                           memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata,_sps,_sps_size);
                           memcpy(ofmt_ctx->streams[outputVideoStreamIndex]->codec->extradata + _sps_size,_pps,_pps_size);
                           ret = avformat_write_header(ofmt_ctx, NULL);
                           if (ret < 0) {
                                 //fprintf(stderr, "Error occurred when opening output file\n");
                                 printf("Error occurred when opening output \n");
                                 exit(0);
                            }
                            write_header_done = 1;
                           printf("Done writing header \n");

                         }

                       }
                       //else
                       #endif  /*end _USE_SPS_PPS */
                       { //IDR Frame

                           videoPts++;
                           if( (nal_unit_type == 0x06) || (nal_unit_type == 0x09) || (nal_unit_type == 0x07) || (nal_unit_type == 0x08))
                           {
                             av_free_packet(&pkt);
                             cursor += nal_unit_length;
                             continue;
                           }

                         if(nal_unit_type == 0x05)
                         {
                           //videoPts++;
                         }
                         if ((nal_unit_type != 0x07) && (nal_unit_type != 0x08))
                         {

                           vDuration += (packet.header.dataPTS - lastvPts);
                           lastvPts = packet.header.dataPTS;
                           //pkt.pts = pkt.dts= packet.header.dataPTS;
                           pkt.pts = pkt.dts= AV_NOPTS_VALUE;//videoPts;
                         }
                         else
                         {
                           //probably sps pps ... no need to transmit. free the packet
                           //av_free_packet(&pkt);
                           pkt.pts = pkt.dts = AV_NOPTS_VALUE;
                         }


                         currFrameLength  = nal_unit_length - 4;//packet.header.dataLen -4;
                         tempCurrFrameLength[3] = currFrameLength;
                         tempCurrFrameLength[2] = currFrameLength>>8;
                         tempCurrFrameLength[1] = currFrameLength>>16;
                         tempCurrFrameLength[0] = currFrameLength>>24;



                         if(nal_unit_type == 0x05)
                         {  
                           pkt.flags        |= AV_PKT_FLAG_KEY;
                         }


                         framebuff = (unsigned char *)malloc(sizeof(unsigned char)* /*packet.header.dataLen*/nal_unit_length );
                         if(framebuff == NULL)
                         {
                           printf("Failed to allocate memory for frame \n\n ");
                           exit(0);
                         }


                         memcpy(framebuff, tempCurrFrameLength,0x04);
                         //memcpy(&framebuff[4], &packet.data[4] , currFrameLength);
                         //put_buffer(pData + cursor + 4, nal_unit_length - 4);// save ES data
                         memcpy(framebuff+4,packet.data + cursor + 4, currFrameLength );
                         pkt.data = framebuff;
                         pkt.size = nal_unit_length;//packet.header.dataLen ;

                         //printf("\nPrinting Frame| Size: %d | NALU Length: %d | NALU: %02x \n",pkt.size,nal_unit_length ,nal_unit_type);

                         /* GET READY TO TRANSMIT THE packet */
                         //pkt.duration = vDuration;
                         in_stream  = ifmt_ctx->streams[pkt.stream_index];
                         out_stream = ofmt_ctx->streams[outStreamIndex];
                         cn = out_stream->codec;
                         //av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);
                         //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                         //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
                         //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
                         pkt.pos = -1;
                         pkt.stream_index = outStreamIndex;

                          if (write_header_done)
                          {
                            //Doxygen suggests I use av_write_frame if I am taking care of interleaving
                            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
                            //ret = av_write_frame(ofmt_ctx, &pkt);
                            if (ret < 0)
                            {
                                fprintf(stderr, "Error muxing Video packet\n");
                                continue;
                            }
                          }

                         /*for(int ii = 0; ii < pkt.size ; ii++)
                           printf("%02x ",framebuff[ii]);*/

                          av_free_packet(&pkt);
                          if(framebuff != NULL)
                           {
                             //printf("Framebuff has mem alloc : freeing 2\n\n");
                             free(framebuff);
                             framebuff = NULL;
                             //printf("Freeing successfully \n\n");
                           }

                         /* TRANSMIT DONE */

                       }
                     cursor += nal_unit_length;
                     }while(cursor < size);


               }
               else
               {
                 printf("This is not annex B bitstream \n\n");
                 for(int ii = 0; ii < packet.header.dataLen ; ii++)
                   printf("%02x ",packet.data[ii]);
                 printf("\n\n");
                 exit(0);
               }

               //video frame has been parsed completely.
               continue;

             }
           else if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
             {

               av_init_packet(&pkt);


               pkt.flags = 1;          
               pkt.pts = audioPts*1024;
               pkt.dts = audioPts*1024;
               //pkt.duration = 1024;    

               pkt.stream_index  = packet.header.streamId + 1;//1;//ost->st->index;     //stream index 0 for vid : 1 for aud
               outStreamIndex = outputAudioStreamIndex;
               //aDuration += (packet.header.dataPTS - lastaPts);
               //lastaPts = packet.header.dataPTS;

               //NOTE: audio sync requires this value
               pkt.pts = pkt.dts= AV_NOPTS_VALUE ;
               //pkt.pts = pkt.dts=audioPts++;
               pkt.data = (uint8_t *)packet.data;//raw_data;
               pkt.size          = packet.header.dataLen;
             }


           //packet.header.streamId
           //now assigning pkt.data in respective if statements above
           //pkt.data          = (uint8_t *)packet.data;//raw_data;
           //pkt.size          = packet.header.dataLen;


           //pkt.duration = 24000;      //24000 assumed based on observation

           //duration calculation
           /*if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
             {
               pkt.duration = vDuration;

             }
           else*/ if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
             {
               //pkt.duration = aDuration;
             }

           in_stream  = ifmt_ctx->streams[pkt.stream_index];
           out_stream = ofmt_ctx->streams[outStreamIndex];

           cn = out_stream->codec;

           if(packet.header.dataType == TRANSFER_PACKET_TYPE_AAC)
           {
             ret = av_bitstream_filter_filter(aacbsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, packet.header.dataLen, pkt.flags & AV_PKT_FLAG_KEY);
             if(ret < 0)
             {
               printf("Failed to execute aac bitstream filter \n\n");
               exit(0);
             }
           }

           //if(packet.header.dataType == TRANSFER_PACKET_TYPE_H264)
           // av_bitstream_filter_filter(h264bsfc, in_stream->codec, NULL, &pkt.data, &pkt.size, packet.data/*pkt.data*/, pkt.size, 0);

             pkt.flags = 1;              


              //NOTE : Commented the lines below synced audio and video streams

             //av_packet_rescale_ts(&pkt, cn->time_base, out_stream->time_base);

             //pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);

             //pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);


             //pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);



           //enabled on Tuesday
           pkt.pos = -1;

           pkt.stream_index = outStreamIndex;

           //Doxygen suggests I use av_write_frame if I am taking care of interleaving
           ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
           //ret = av_write_frame(ofmt_ctx, &pkt);
           if (ret < 0)
            {
                fprintf(stderr, "Error muxing packet\n");
                continue;
            }

          av_free_packet(&pkt);
          if(framebuff != NULL)
           {
             //printf("Framebuff has mem alloc : freeing 2\n\n");
             free(framebuff);
             framebuff = NULL;
             //printf("Freeing successfully \n\n");
           }
         }
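    A note on the timestamp arithmetic involved (this is an illustration of timebase rescaling, not the poster's code): with every pkt.pts/pkt.dts set to AV_NOPTS_VALUE the muxer has to guess timestamps, and ffprobe's tbr (59.94 here) is itself only a guess derived from whatever timestamps end up in the file. The sketch below assumes, as the question states, 30 fps video and 1024-sample AAC frames, and shows the integer conversion that FFmpeg's av_rescale_q would perform into FLV's millisecond timebase (the "1k tbn" in the probe output); rescale_ts is a simplified stand-in, not the real API.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* Simplified stand-in for FFmpeg's av_rescale_q: converts a timestamp
     * from timebase num_in/den_in to timebase num_out/den_out.
     * Integer-only and truncating; the real function also guards
     * against overflow and offers rounding modes. */
    static int64_t rescale_ts(int64_t ts, int64_t num_in, int64_t den_in,
                              int64_t num_out, int64_t den_out)
    {
        return ts * num_in * den_out / (den_in * num_out);
    }

    int main(void)
    {
        /* Video: one tick per frame at 30 fps -> input timebase 1/30.
         * FLV stores timestamps in milliseconds -> output timebase 1/1000. */
        for (int64_t frame = 0; frame < 3; frame++)
            printf("video frame %lld -> pts %lld ms\n",
                   (long long)frame,
                   (long long)rescale_ts(frame, 1, 30, 1, 1000)); /* 0, 33, 66 */

        /* Audio: one AAC frame is 1024 samples; at 48000 Hz the input
         * timebase is 1/48000, so the second AAC frame lands at 21 ms. */
        printf("AAC frame 1 -> pts %lld ms\n",
               (long long)rescale_ts(1024, 1, 48000, 1, 1000)); /* 21 */
        return 0;
    }
    ```

    With real timestamps in place, the usual pattern is to set pkt.pts/pkt.dts in the input timebase and let av_packet_rescale_ts(&pkt, in_tb, out_stream->time_base) do this conversion before av_interleaved_write_frame, rather than writing AV_NOPTS_VALUE and letting the muxer guess.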
  • Splitting audio tracks with incorrect length - FFMPEG

    26 March 2018, by channae

    Version : com.writingminds:FFmpegAndroid:0.3.2

    I have an audio file 43 seconds long, and I wrote an algorithm to split it at each 10-second mark where a word ends (I used IBM Watson to get the ending timestamps). So the crop duration is always around 10 to 11 seconds, except of course for the 5th segment. I have printed my commands so that you will understand my use case better.

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:00.000 -codec copy -t 00:00:10.010 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_1.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:10.010 -codec copy -t 00:00:21.090 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_2.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:21.090 -codec copy -t 00:00:30.480 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_3.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:30.480 -codec copy -t 00:00:40.120 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_4.wav

    System.out: Split Command: -y -i /storage/emulated/0/AudioClipsForSpeakerRecognition/merge.wav -ss 00:00:40.120 -codec copy -t 00:00:43.000 /storage/emulated/0/AudioClipsForSpeakerRecognition/segment_5.wav

    However, when playing the cropped audio files I noticed that segment_1 is about 10 seconds long but segment_2 is about 20 seconds, and so on. As a result, some of the audio that belongs to segment_1 is also present in segment_2, etc. Why is this happening?

    Appreciate your response.
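    The growing segment lengths follow directly from the commands shown: ffmpeg's -t option takes a duration, not a stop time, so passing the next cut point (e.g. 00:00:21.090) as -t makes segment_2 run for roughly 21 seconds from the seek point. A minimal sketch (plain C, with the question's cut points hard-coded and a shortened merge.wav path for illustration) of deriving correct -ss/-t pairs:

    ```c
    #include <stdio.h>

    /* Cut points from the question, in seconds (43.000 = end of merge.wav). */
    static const double cuts[] = {0.0, 10.010, 21.090, 30.480, 40.120, 43.000};

    /* -t takes a DURATION, not a stop time: the length of segment i is the
     * next cut point minus the current one. */
    static double segment_duration(int i)
    {
        return cuts[i + 1] - cuts[i];
    }

    int main(void)
    {
        int n = sizeof(cuts) / sizeof(cuts[0]);
        for (int i = 0; i + 1 < n; i++)
            printf("-y -i merge.wav -ss %.3f -codec copy -t %.3f segment_%d.wav\n",
                   cuts[i], segment_duration(i), i + 1);
        return 0;
    }
    ```

    Newer ffmpeg builds also accept -to, which does take a stop time, but its behaviour when combined with an input-side -ss has differed between versions, so computing the -t duration explicitly is the safer option.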

  • Gujarati text does not show properly on output video using FFmpeg Library in Android

    15 March 2024, by sanjay dangar

    I am overlaying Gujarati text on a video using the FFmpeg library on Android.

    With my code below, Gujarati conjunct (joined) characters do not render properly.

    I enter this text: "વડાપ્રધાનશ્રી નરેન્દ્રભાઇ મોદી." but the output video does not display it correctly.
     fun addFrame2VideoEditFun(
         videoPath: String,
         fontPath: String,
         outputPath: String
     ): Array<String> {
         val inputs: ArrayList<String> = ArrayList()
         var textFile = "વડાપ્રધાનશ્રી નરેન્દ્રભાઇ મોદી."
         inputs.apply {
             add("-i")
             add(videoPath)
             add("-vf")
             add("drawtext=fontfile=$fontPath:text=$textFile:fontsize=24:fontcolor=white:x=(w-text_w)/2:y=h-line_h")
             add("-c:a")
             add("copy")
             add(outputPath)
         }
         return inputs.toArray(arrayOfNulls(inputs.size))
     }

    My output video shows the text rendered incorrectly.

    Gujarati text is not shown properly on the output video using the FFmpeg library.