
Media (1)

Keyword: - Tags -/Rennes

Other articles (12)

  • Emballe médias: what is it for?

    4 February 2011, by

    This plugin is designed to manage sites that publish documents of all kinds.
    It creates "media" items, meaning that a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only one document can be linked to a so-called "media" article;

  • Possible deployments

    31 January 2010, by

    Two types of deployment are possible, depending on two aspects: the chosen installation method (standalone or as a farm); the expected number of daily encodings and the expected traffic.
    Video encoding is a heavy process that consumes a great deal of system resources (CPU and RAM), so all of this has to be taken into account. The system is therefore only feasible on one or more dedicated servers.
    Single-server version
    The single-server version consists of using only one (...)

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators, but each can be configured independently to change the minimum status required to use it, in particular: writing content on the site, adjustable in the form template management; adding notes to articles; adding captions and annotations to images;

On other sites (3546)

  • ffmpeg piped output producing incorrect metadata frame count with Python

    6 December 2024, by Xorgon

    Using Python, I am attempting to use ffmpeg to compress videos and put them in a PowerPoint. This works great; however, the video files themselves have incorrect frame counts, which can cause issues when I read from those videos in other code.

    


    Edit for clarification: by "frame count" I mean the metadata frame count. The actual number of frames contained in the video is correct, but querying the metadata gives an incorrect frame count.

    


    Having eliminated the PowerPoint aspect of the code, I've narrowed this down to the following minimal reproducible example of saving an output from an ffmpeg pipe:

    


    from subprocess import Popen, PIPE

video_path = 'test_mp4.mp4'

ffmpeg_pipe = Popen(['ffmpeg',
                     '-y',  # Overwrite files
                     '-i', f'{video_path}',  # Input from file
                     '-f', 'avi',  # Output format
                     '-c:v', 'libx264',  # Codec
                     '-'],  # Output to pipe
                    stdout=PIPE)

new_path = "piped_video.avi"
vid_file = open(new_path, "wb")
vid_file.write(ffmpeg_pipe.stdout.read())
vid_file.close()
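
    For reference, here is an equivalent way to save the piped output that streams it to disk in chunks and waits for ffmpeg to exit; this is only a sketch using the same paths and flags as above, and it is not expected to change the metadata behaviour.

from subprocess import Popen, PIPE
import shutil

video_path = 'test_mp4.mp4'
new_path = "piped_video.avi"

ffmpeg_pipe = Popen(['ffmpeg',
                     '-y',               # Overwrite files
                     '-i', video_path,   # Input from file
                     '-f', 'avi',        # Output format
                     '-c:v', 'libx264',  # Codec
                     '-'],               # Output to pipe
                    stdout=PIPE)

# Stream the muxed output to disk in chunks instead of buffering it all in memory,
# then wait for ffmpeg to exit so the file is complete before it is used.
with open(new_path, "wb") as vid_file:
    shutil.copyfileobj(ffmpeg_pipe.stdout, vid_file)
ffmpeg_pipe.wait()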


    


    I've tested several different videos. One small example video that I've tested can be found here.

    


    I've tried a few different codecs with avi format and tried libvpx with webm format. For the avi outputs, the frame count usually reads as 1073741824 (2^30). Weirdly, for the webm format, the frame count read as -276701161105643264.

    


    This is a snippet I used to read the frame count, but one could also see the error by opening the video details in Windows Explorer and seeing the total time as something like 9942 hours, 3 minutes, and 14 seconds.

    


    import cv2

video_path = 'test_mp4.mp4'
new_path = "piped_video.webm"

cap = cv2.VideoCapture(video_path)
print(f"Original video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()

cap = cv2.VideoCapture(new_path)
print(f"Piped video frame count: = {int(cap.get(cv2.CAP_PROP_FRAME_COUNT)):d}")
cap.release()
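
    To separate the metadata from the actual content, one can also ask ffprobe for both the container's reported frame count (nb_frames) and the number of frames it actually decodes (nb_read_frames). The following is a sketch of such a check, assuming the piped output was saved as piped_video.avi:

import subprocess

new_path = "piped_video.avi"

# nb_frames comes from the container metadata; nb_read_frames is obtained by
# actually decoding and counting frames (-count_frames), so the two values
# should differ if only the metadata is wrong.
result = subprocess.run(
    ['ffprobe', '-v', 'error',
     '-select_streams', 'v:0',
     '-count_frames',
     '-show_entries', 'stream=nb_frames,nb_read_frames',
     '-of', 'default=noprint_wrappers=1',
     new_path],
    capture_output=True, text=True)
print(result.stdout)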


    


    For completeness, here is the ffmpeg output:

    


    ffmpeg version 2023-06-11-git-09621fd7d9-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
  configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-d3d11va --enable-dxva2 --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
  libavutil      58. 13.100 / 58. 13.100
  libavcodec     60. 17.100 / 60. 17.100
  libavformat    60.  6.100 / 60.  6.100
  libavdevice    60.  2.100 / 60.  2.100
  libavfilter     9.  8.101 /  9.  8.101
  libswscale      7.  3.100 /  7.  3.100
  libswresample   4. 11.100 /  4. 11.100
  libpostproc    57.  2.100 / 57.  2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_mp4.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    creation_time   : 2022-08-10T12:54:09.000000Z
  Duration: 00:00:06.67, start: 0.000000, bitrate: 567 kb/s
  Stream #0:0[0x1](eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], 563 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      creation_time   : 2022-08-10T12:54:09.000000Z
      handler_name    : Mainconcept MP4 Video Media Handler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 @ 0000018c68c8b9c0] using SAR=1/1
[libx264 @ 0000018c68c8b9c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0000018c68c8b9c0] profile High, level 2.1, 4:2:0, 8-bit
Output #0, avi, to 'pipe:':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: isommp42
    ISFT            : Lavf60.6.100
  Stream #0:0(eng): Video: h264 (H264 / 0x34363248), yuv420p(progressive), 384x264 [SAR 1:1 DAR 16:11], q=2-31, 30 fps, 30 tbn (default)
    Metadata:
      creation_time   : 2022-08-10T12:54:09.000000Z
      handler_name    : Mainconcept MP4 Video Media Handler
      vendor_id       : [0][0][0][0]
      encoder         : Lavc60.17.100 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[out#0/avi @ 0000018c687f47c0] video:82kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.631060%
frame=  200 fps=0.0 q=-1.0 Lsize=      85kB time=00:00:06.56 bitrate= 106.5kbits/s speed=76.2x    
[libx264 @ 0000018c68c8b9c0] frame I:1     Avg QP:16.12  size:  3659
[libx264 @ 0000018c68c8b9c0] frame P:80    Avg QP:21.31  size:   647
[libx264 @ 0000018c68c8b9c0] frame B:119   Avg QP:26.74  size:   243
[libx264 @ 0000018c68c8b9c0] consecutive B-frames:  3.0% 53.0%  0.0% 44.0%
[libx264 @ 0000018c68c8b9c0] mb I  I16..4: 17.6% 70.6% 11.8%
[libx264 @ 0000018c68c8b9c0] mb P  I16..4:  0.8%  1.7%  0.6%  P16..4: 17.6%  4.6%  3.3%  0.0%  0.0%    skip:71.4%
[libx264 @ 0000018c68c8b9c0] mb B  I16..4:  0.1%  0.3%  0.2%  B16..8: 11.7%  1.4%  0.4%  direct: 0.6%  skip:85.4%  L0:32.0% L1:59.7% BI: 8.3%
[libx264 @ 0000018c68c8b9c0] 8x8 transform intra:59.6% inter:62.4%
[libx264 @ 0000018c68c8b9c0] coded y,uvDC,uvAC intra: 48.5% 0.0% 0.0% inter: 3.5% 0.0% 0.0%
[libx264 @ 0000018c68c8b9c0] i16 v,h,dc,p: 19% 39% 25% 17%
[libx264 @ 0000018c68c8b9c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 25% 30%  3%  3%  4%  4%  4%  5%
[libx264 @ 0000018c68c8b9c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 20% 16%  6%  8%  8%  8%  5%  6%
[libx264 @ 0000018c68c8b9c0] i8c dc,h,v,p: 100%  0%  0%  0%
[libx264 @ 0000018c68c8b9c0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0000018c68c8b9c0] ref P L0: 76.2%  7.9% 11.2%  4.7%
[libx264 @ 0000018c68c8b9c0] ref B L0: 85.6% 12.9%  1.5%
[libx264 @ 0000018c68c8b9c0] ref B L1: 97.7%  2.3%
[libx264 @ 0000018c68c8b9c0] kb/s:101.19


    


    So the question is: why does this happen, and how can one avoid it?

    


  • Using libav to encode RGBA frames into MP4 but the output is a mess

    5 October 2019, by Cu2S

    I’m trying to decode a video into RGB frames, postprocess the frames, and finally encode the frames back into a video. But the output video is a complete mess:
    (Screenshot from PotPlayer)

    I wrote a minimal example to illustrate my idea. First, I read some information from the source video:

       AVFormatContext* inputFormatCtx = nullptr;
       int ret = avformat_open_input(&inputFormatCtx, inputParamsVideo, nullptr, nullptr);
       assert(ret >= 0);
       ret = avformat_find_stream_info(inputFormatCtx, NULL);
       av_dump_format(inputFormatCtx, 0, inputParamsVideo, 0);

       assert(ret >= 0);
       AVStream* inputVideoStream = nullptr;
       for (int i = 0; i < inputFormatCtx->nb_streams; i++)
       {
           const auto inputStream = inputFormatCtx->streams[i];
           if (inputStream->codec->codec_type == AVMEDIA_TYPE_VIDEO)
           {
               inputVideoStream = inputStream;
               break;
           }
       }

       assert(inputVideoStream != nullptr);
       AVCodecParameters* inputParams = inputVideoStream->codecpar;
       AVRational framerate = inputVideoStream->codec->framerate;
       auto gop_size = inputVideoStream->codec->gop_size;
       auto maxBFrames = inputVideoStream->codec->max_b_frames;

    Then I assign the information to the output stream:

    AVFormatContext *outputAVFormat = nullptr;
    avformat_alloc_output_context2(&outputAVFormat, nullptr, nullptr, kOutputPath);
    assert(outputAVFormat);
    AVCodec* codec = avcodec_find_encoder(outputAVFormat->oformat->video_codec);
    assert(codec);
    AVCodecContext* encodingCtx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(encodingCtx, inputParams);
    encodingCtx->time_base = av_inv_q(framerate);
    encodingCtx->max_b_frames = maxBFrames;
    encodingCtx->gop_size = gop_size;


    if (outputAVFormat->oformat->flags & AVFMT_GLOBALHEADER)
       encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    AVStream* outStream = avformat_new_stream(outputAVFormat, nullptr);
    assert(outStream != nullptr);
    ret = avcodec_parameters_from_context(outStream->codecpar, encodingCtx);
    assert(ret >= 0);
    outStream->time_base = encodingCtx->time_base;

    Then I convert the RGBA frames (which are read from files) into YUV420P via sws_scale and encode them:

       ret = avcodec_open2(encodingCtx, codec, nullptr);
       assert(ret >= 0);
       av_dump_format(outputAVFormat, 0, kOutputPath, 1);

       ret = avio_open(&outputAVFormat->pb, kOutputPath, AVIO_FLAG_WRITE);
       assert(ret >= 0);
       ret = avformat_write_header(outputAVFormat, nullptr);
       assert(ret >= 0);

       AVFrame* frame = av_frame_alloc();
       frame->width = inputParams->width;
       frame->height = inputParams->height;
       frame->format = inputParams->format;
       frame->pts = 0;
       assert(ret >= 0);

       ret = av_frame_get_buffer(frame, 32);
       int frameCount = 0;
       assert(ret >= 0);
       ret = av_frame_make_writable(frame);
       assert(ret >= 0);
       SwsContext* swsContext = sws_getContext(inputParams->width, inputParams->height,
           AV_PIX_FMT_RGBA, frame->width,
           frame->height, static_cast<AVPixelFormat>(inputParams->format),
           SWS_BILINEAR, NULL, NULL, NULL);


       for (auto inputPicPath : std::filesystem::directory_iterator(kInputDir))
       {
           int width, height, comp;
           unsigned char* data = stbi_load(inputPicPath.path().string().c_str(), &width, &height, &comp, 4);
           int srcStrides[1] = { 4 * width };
           int ret = sws_scale(swsContext, &data, srcStrides, 0, height, frame->data,
               frame->linesize);
           assert(ret >= 0);
           frame->pts = frameCount;
           //frame->pict_type = AV_PICTURE_TYPE_I;
           frameCount += 1;
           encode(encodingCtx, frame, 0, outputAVFormat);

           stbi_image_free(data);
       }

       while (encode(encodingCtx, nullptr, 0, outputAVFormat))
       {
           ;
       }

       static bool encode(AVCodecContext* enc_ctx, AVFrame* frame, std::uint32_t streamIndex,
           AVFormatContext * formatCtx)
       {
           int ret;
           int got_output = 0;
           AVPacket packet = {};
           av_init_packet(&packet);
           ret = avcodec_encode_video2(enc_ctx, &packet, frame, &got_output);
           assert(ret >= 0);
           if (got_output) {
               packet.stream_index = streamIndex;
               av_packet_rescale_ts(&packet, enc_ctx->time_base, formatCtx->streams[streamIndex]->time_base);
               ret = av_interleaved_write_frame(formatCtx, &packet);
               assert(ret >= 0);
               return true;
           }
           else {
               return false;
           }
       }

    Finally I cleaned up stuff:

       av_write_trailer(outputAVFormat);
       sws_freeContext(swsContext);
       avcodec_free_context(&encodingCtx);
       avio_closep(&outputAVFormat->pb);
       avformat_free_context(outputAVFormat);
       av_frame_free(&frame);

    I dumped my input format and my output format:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'H:\Me.MP4':
     Metadata:
       major_brand     : mp42
       minor_version   : 1
       compatible_brands: mp41mp42isom
       creation_time   : 2019-04-03T05:44:22.000000Z
     Duration: 00:00:06.90, start: 0.000000, bitrate: 1268 kb/s
       Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, 1238 kb/s, 29.86 fps, 30 tbr, 600 tbn, 1200 tbc (default)
       Metadata:
         creation_time   : 2019-04-03T05:44:22.000000Z
         handler_name    : Core Media Video
       Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 8000 Hz, stereo, fltp, 24 kb/s (default)
       Metadata:
         creation_time   : 2019-04-03T05:44:22.000000Z
         handler_name    : Core Media Audio
    [libx264 @ 000002126F90C1C0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
    [libx264 @ 000002126F90C1C0] profile High, level 3.1, 4:2:0, 8-bit
    [libx264 @ 000002126F90C1C0] 264 - core 157 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=12 keyint_min=1 scenecut=40 intra_refresh=0 rc_lookahead=12 rc=abr mbtree=1 bitrate=1238 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
    Output #0, mp4, to './output.mp4':
       Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 540x960, q=2-31, 1238 kb/s, 29.86 tbn

    Update:

    After I deleted

    encodingCtx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

    the output video is correct. Also, outputting AVI works, too.

  • Manual encoding into MPEG-TS

    4 July 2014, by Lane

    SO...

    I am trying to take an H264 Annex B byte stream video and encode it into MPEG-TS in pure Java. My goal is to create a minimal, valid, single-program MPEG-TS stream that does not include any timing information (PCR, PTS, DTS).

    I am currently at the point where my generated file can be passed to ffmpeg (ffmpeg -i myVideo.ts) and ffmpeg reports...

    [NULL @ 0x7f8103022600] start time is not set in estimate_timings_from_pts

    Input #0, mpegts, from 'video.ts':
    Duration: N/A, bitrate: N/A
    Program 1
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...it seems like this warning about the start time is not a big deal... but ffmpeg is unable to determine how long the video is. If I create another MPEG-TS file from my video file (ffmpeg -i myVideo.ts -vcodec copy validVideo.ts) and run ffmpeg -i validVideo.ts I get...

    Input #0, mpegts, from 'video2.ts':
    Duration: 00:00:11.61, start: 1.400000, bitrate: 3325 kb/s
    Program 1
     Metadata:
       service_name    : Service01
       service_provider: FFmpeg
     Stream #0:0[0x100]: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc

    ...so you can see the timing information and bitrate are there, and so is the metadata.

    My H264 video is composed of only I and P Frames (with the SPS and PPS preceding the I Frame, of course), and the way that I am creating my MPEG-TS stream is...

    1. Write a single PAT at the beginning of the file
    2. Write a single PMT at the beginning of the file
    3. Create TS and PES packets from the SPS, PPS and I Frame (AUD NALs too, if this is required?); a byte-layout sketch of these pieces follows this list
    4. Create TS and PES packets from the P Frame (again, AUD NALs too, if required)
    5. For the last payload of either an I Frame or P Frame, add filler bytes to an adaptation field to make sure it fits into a full TS packet
    6. Repeat 3-5 for the entire file
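
    For illustration only, here is a rough sketch (in Python, with helper names that are mine and not part of the original Java code) of the byte layouts used in steps 3-5: the 4-byte TS header, the PES header with no PTS/DTS, and an adaptation field made of stuffing bytes.

    def ts_header(pid, payload_unit_start, continuity_counter, has_adaptation=False):
        # 4-byte MPEG-TS packet header: sync byte, PUSI + 13-bit PID, then
        # scrambling / adaptation_field_control / continuity_counter.
        b1 = (0x40 if payload_unit_start else 0x00) | ((pid >> 8) & 0x1F)
        b2 = pid & 0xFF
        afc = 0x30 if has_adaptation else 0x10   # '11' = adaptation + payload, '01' = payload only
        b3 = afc | (continuity_counter & 0x0F)
        return bytes([0x47, b1, b2, b3])         # ts_header(0x100, True, 0) -> 47 41 00 10

    def pes_header_no_timing(stream_id=0xE0):
        # PES header with packet_length = 0 and no PTS/DTS, matching the dumps below:
        # 00 00 01 e0 | 00 00 | 80 | 00 | 00
        return bytes([0x00, 0x00, 0x01, stream_id, 0x00, 0x00, 0x80, 0x00, 0x00])

    def stuffed_packet(pid, continuity_counter, payload):
        # Pad a short final payload out to a full 188-byte packet with an adaptation
        # field consisting of 0xFF stuffing (assumes 0 < len(payload) < 184).
        stuffing = 188 - 4 - len(payload)
        if stuffing == 1:
            adaptation = bytes([0x00])           # length-0 adaptation field
        else:
            adaptation = bytes([stuffing - 1, 0x00]) + b'\xff' * (stuffing - 2)
        return ts_header(pid, False, continuity_counter, has_adaptation=True) + adaptation + payload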

    ...my PAT looks like this...

    4740 0010 0000 b00d 0001 c100 0000 01f0
    002a b104 b2ff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...and my PMT looks like this...

    4750 0010
    0002 b012 0001 c100 00ff fff0 001b e100
    f000 c15b 41e0 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff

    ...notice that after the c100 00, the "ff ff" before the f0... says that we are not using a PCR... Also notice that I have updated my CRC to reflect this change to the PMT. My first I Frame packet looks like...

    4741 0010 0000 01e0
    0000 8000 0000 0000 0109 f000 0000 0127
    4d40 288d 8d60 2802 dd80 b501 0101 4000
    00fa 4000 3a98 3a18 00b7 2000 3380 2ef2
    e343 0016 e400 0670 05de 5c16 345d c000
    0000 0128 ee3c 8000 0000 0165 8880 0020
    0000 4fe5 63b5 4e90 b11c 9f8f f891 10f3
    13b1 666b 9fc6 03e9 e321 36bf 1788 347b
    eb23 fc89 5772 6e2e 1714 96df ed16 9b30
    252d ceb7 07e9 a0c7 c6e7 9515 be87 2df1
    81f3 b9d2 ba5f 243e 2d5c cba2 8ca5 b798
    6bec 8c43 0b5d bbda bc5b 6e7c e15c 84e8
    2f13 be84

    ...you’ll notice that after the 01e0 0000, the 8000 00 is the PES header extension where I specify no PTS / DTS and a remaining header length of zero. My first P Frame packet looks like...

    4741 001d
    0000 01e0 0000 8000 0000 0000 0109 f000
    0000 0141 9a00 0200 0593 ff45 a7ae 1acd
    f2d7 f9ec 557f cdb6 ba38 60d6 a626 5edb
    4bb9 9783 89e2 d7e1 102e 4625 2fbf ce16
    f952 d8c9 f027 e55a 6b2a 81c3 48d4 6a45
    050a f355 fbec db01 6562 6405 04aa e011
    50ec 0b45 45e5 0df7 2fed a3f8 ac13 2e69
    6739 6d81 f13d 2455 e6ca 1c6b dc96 65d5
    3bad f250 7dab 42e4 7ba9 f564 ee61 29fb
    1b2c 974c 6924 1a1f 99ef 063c b99a c507
    8c22 b0f8 b14c 3e4d 01d0 6120 4e19 8725
    2fda 6550 f907 3f87

    ...and whenever an I Frame or P Frame ends, I have a TS packet with an adaptation field like...

    4701 003c b000 ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff ffff ffff
    ffff ffff ffff ffff ffff ffff

    ...where the first 0xb0 bytes are the adaptation field stuffing bytes and the remaining ones are the final bytes of the I or P Frame. So, as you can tell, I can use ffmpeg and pass it my file to create a valid movie in any format. However, I need the file I create to be in the proper format on its own, and I cannot quite figure out what the last piece I am missing is. Any ideas?