
Other articles (40)
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that led to the problem
- a link to the site / page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (7344)
-
Reverse Engineering Radius VideoVision
3 April 2011, by Multimedia Mike — Reverse Engineering
I was called upon to help reverse engineer an old video codec called VideoVision (FourCC: PGVV), ostensibly from a company named Radius. I'm not sure of the exact details, but I think a game developer has a bunch of original FMV data from an old game locked up in this format. The name of the codec sounded familiar. Indeed, we have had a sample in the repository since 2002. Alex B. did some wiki work on the codec some years ago. The wiki mentions that there existed a tool to transcode PGVV data into MJPEG-B data, which is already known and supported by FFmpeg.
The Software
My contacts were able to point me to some software, now safely archived in the PGVV samples directory. There is StudioPlayer2.6.2.sit.hqx which is supposed to be a QuickTime component for working with PGVV data. I can't even remember how to deal with .sit or .hqx data. Then there is RadiusVVTranscoder101.zip which is the tool that transcodes to MJPEG-B.
Disassembling for Reverse Engineering
Since I could actually unpack the transcoder, I set my sights on that. Unpacking the archive sets up a directory structure for a component. There is a binary called RadiusVVTranscoder under RadiusVVTranscoder.component/Contents/MacOS/. Basic deadlisting disassembly is performed via 'otool' as shown:
otool -tV RadiusVVTranscoder | c++filt
This results in a deadlisting of both PowerPC and 32-bit x86 code, as the binary is a "fat" Mac OS X binary designed to run on both architectures. The command line also demangles C++ function signatures which gives useful insight into the parameters passed to a function.
Pretty Pictures
The binary had a lot of descriptive symbols. As a basis for reverse engineering, I constructed call graphs using these symbols. Here are the 2 most relevant portions (click for larger images). The codec initialization generates Huffman tables relevant to the codec:
The main decode function calls AddMJPGFrame which apparently does the heavy lifting for the transcode process :
Based on this tree, I'm guessing that luma blocks can be losslessly transcoded (perhaps with different Huffman tables) while chroma blocks may rely on a different quantization method.
Assembly Constructs
I started looking at the instructions (the x86 ones, of course). The binary uses a calling convention I haven't seen before, at least not for the x86: rather than pushing function arguments onto the stack, the code manually subtracts, e.g., 12 from the ESP register, loads 3 32-bit arguments into memory relative to ESP, and then proceeds with the function call.
I'm also a little unclear on constructs such as "call ___i686.get_pc_thunk.bx" seen throughout relevant functions such as MakeRadiusQuantizationTables().
I’m just presenting what I have so far in case anyone else wants to try their hand.
-
How do I sync 4 videos in a grid to play the same frame at the same time?
28 December 2022, by PirateApp

- 4 of us have recorded ourselves playing a game and want to create a 4 x 4 video grid
- The game has cutscenes at the beginning followed by each person having their unique part for the rest of the video
- I am looking to synchronize the grid such that it starts at the same place in the cutscene for everyone
- Kindly take a look at what is happening currently. The cutscene is off by a few seconds for everyone
- Imagine time offsets a, b, c, d such that, when I add each offset to its video, the entire video grid will be in sync
- How do I find these a, b, c, d values and, more importantly, how do I add them in filter_complex? (see the sketch just after this list)
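A sketch of the second part (placeholder values, not a confirmed answer): assuming the offsets a, b, c, d are known in seconds, each input can be cut with trim and re-stamped with setpts before the grid is built. Here video1.mkv..video4.mkv and A..D are placeholders; finding the actual offsets is what the updates further down work out.

ffmpeg
 -i video1.mkv -i video2.mkv -i video3.mkv -i video4.mkv
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] trim=start=A, setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] trim=start=B, setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] trim=start=C, setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] trim=start=D, setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540
 "
 -c:v libx264 synced_output.mkv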
I used the ffmpeg command below to generate a 4 x 4 video grid and it seems to work


ffmpeg
 -i nano_prologue.mkv -i macko_nimble_guardian.mkv -i nano_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540
 "
 -c:v libx264 output.mkv



My problem though is that since each of us starts recording at slightly different times, the cutscenes are out of sync


As per the screenshot below, you can see that each video has the same scene starting at a slightly different time.


Is there a way to find where the same frame starts in all videos and then sync each video to start from that frame, or 20 seconds before it?




UPDATE 1


I have figured out the offset for each video with millisecond precision using the following technique:


Take a screenshot of the first video at a particular point in the cutscene, save the image as a PNG, and run the command below on the remaining 3 videos to find out where this screenshot appears in each one.


ffmpeg -i "video2.mp4" -r 1 -loop 1 -i screenshot.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" -f null -



Use the command above to search every video for the offset of that cutscene. In the blackframe output, pblack is the percentage of pixels that are black after the difference blend (i.e. how closely the frame matches the screenshot) and t is the matching timestamp in seconds.
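If scanning a whole file is slow, one possible refinement (an assumption, not part of the original post) is to restrict the search to a window where the cutscene is expected, using input seeking:

ffmpeg -ss 30 -t 90 -i "video2.mp4" -r 1 -loop 1 -i screenshot.png -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" -f null -

Note that with -ss placed before -i, the reported t: values are relative to the seek point, so the 30 seconds must be added back.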


It gave me this


VIDEO 3 OFFSET


[Parsed_blackframe_1 @ 0x600003af00b0] frame:3144 pblack:92 pts:804861 t:52.399805 type:P last_keyframe:3120

[Parsed_blackframe_1 @ 0x600003af00b0] frame:3145 pblack:96 pts:805117 t:52.416471 type:P last_keyframe:3120



VIDEO 2 OFFSET


[Parsed_blackframe_1 @ 0x6000014dc0b0] frame:3629 pblack:91 pts:60483 t:60.483000 type:P last_keyframe:3500



VIDEO 4 OFFSET


[Parsed_blackframe_1 @ 0x600002f84160] frame:2885 pblack:93 pts:48083 t:48.083000 type:P last_keyframe:2880

[Parsed_blackframe_1 @ 0x600002f84160] frame:2886 pblack:96 pts:48100 t:48.100000 type:P last_keyframe:2880



Now how do I use filter_complex to start each video at either the frame or the timestamp above? I would like to include, say, 10 seconds before the above frame in each video so that it starts from the beginning of the cutscene.
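One hedged reading of the logs above: trim also accepts start= in seconds, which sidesteps the fact that the pts values are in each input's own timebase. Subtracting the 10-second lead-in from each t: value gives, per video:

video 2: trim=start=50.483 (60.483 - 10)
video 3: trim=start=42.4 (52.4 - 10)
video 4: trim=start=38.1 (48.1 - 10)
video 1: trim=start=T1, where T1 is the screenshot's own timestamp in video 1 minus 10 (not in the logs)

Each value slots into the corresponding input chain of the grid command, for example:

 [1:v] trim=start=50.483, setpts=PTS-STARTPTS, scale=960x540 [upperright];

with the rest of the grid command unchanged.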


UPDATE 2


This command currently gives me a 100% synced video. How do I make it start 15 seconds before the specified frame numbers, and how do I make it use the audio track from video 2 instead?


ffmpeg
 -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] trim=start_pts=49117,setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] trim=start_pts=50483,setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] trim=start_pts=795117,setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] trim=start_pts=38100,setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540
 "
 -c:v libx264 output.mkv
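A possible adjustment for both remaining points (a sketch under assumptions, not a confirmed answer): trim/atrim accept start= in seconds, which avoids per-input timebase arithmetic. The values below are the blackframe t: values above minus 15; the 44.117 for input 0 assumes its start_pts of 49117 is in milliseconds (49.117 s, already 10 s before its matched frame). Audio is taken from input 1 (video 2) by trimming [1:a] with the same offset, labelling both filtergraph outputs, and mapping them explicitly:

ffmpeg
 -i v_nimble_guardian.mkv -i macko_nimble_guardian.mkv -i ghost_nimble_guardian_subtle_arrow_1.mp4 -i nano_nimble_guardian.mkv
 -filter_complex "
 nullsrc=size=1920x1080 [base];
 [0:v] trim=start=44.117, setpts=PTS-STARTPTS, scale=960x540 [upperleft];
 [1:v] trim=start=45.483, setpts=PTS-STARTPTS, scale=960x540 [upperright];
 [2:v] trim=start=37.4, setpts=PTS-STARTPTS, scale=960x540 [lowerleft];
 [3:v] trim=start=33.1, setpts=PTS-STARTPTS, scale=960x540 [lowerright];
 [base][upperleft] overlay=shortest=1 [tmp1];
 [tmp1][upperright] overlay=shortest=1:x=960 [tmp2];
 [tmp2][lowerleft] overlay=shortest=1:y=540 [tmp3];
 [tmp3][lowerright] overlay=shortest=1:x=960:y=540 [v];
 [1:a] atrim=start=45.483, asetpts=PTS-STARTPTS [a]
 "
 -map "[v]" -map "[a]" -c:v libx264 -c:a aac output.mkv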



-
Unity : Converting Texture2D to YUV420P and sending with UDP using FFmpeg
22 June 2018, by potu1304
In my Unity game each frame is rendered into a texture and then put together into a video using FFmpeg. Now my question is whether I am doing this right, because avcodec_send_frame throws an exception every time.
I am pretty sure that I am doing something wrong, or doing it in the wrong order, or simply missing something. Here is the code for capturing the texture:
void Update() {
    //StartCoroutine(CaptureFrame());
    if (rt == null)
    {
        rect = new Rect(0, 0, captureWidth, captureHeight);
        rt = new RenderTexture(captureWidth, captureHeight, 24);
        frame = new Texture2D(captureWidth, captureHeight, TextureFormat.RGB24, false);
    }

    // Render the camera into the RenderTexture and read the pixels back
    Camera camera = this.GetComponent<Camera>(); // NOTE: added because there was no reference to camera in original script; must add this script to Camera
    camera.targetTexture = rt;
    camera.Render();
    RenderTexture.active = rt;
    frame.ReadPixels(rect, 0, 0);
    frame.Apply();
    camera.targetTexture = null;
    RenderTexture.active = null;

    // Hand the raw RGB24 bytes to the encoder
    byte[] fileData = null;
    fileData = frame.GetRawTextureData();
    encoding(fileData, fileData.Length);
}
And here is the code for encoding and sending the byte data:
private unsafe void encoding(byte[] bytes, int size)
{
    Debug.Log("Encoding...");

    // Find the H.264 encoder and set up a codec context
    AVCodec* codec;
    codec = ffmpeg.avcodec_find_encoder(AVCodecID.AV_CODEC_ID_H264);
    int ret, got_output = 0;
    AVCodecContext* codecContext = null;
    codecContext = ffmpeg.avcodec_alloc_context3(codec);
    codecContext->bit_rate = 400000;
    codecContext->width = captureWidth;
    codecContext->height = captureHeight;
    //codecContext->time_base.den = 25;
    //codecContext->time_base.num = 1;

    AVRational timeBase = new AVRational();
    timeBase.num = 1;
    timeBase.den = 25;
    codecContext->time_base = timeBase;
    //AVStream* videoAVStream = null;
    //videoAVStream->time_base = timeBase;

    AVRational frameRate = new AVRational();
    frameRate.num = 25;
    frameRate.den = 1;
    codecContext->framerate = frameRate;
    codecContext->gop_size = 10;
    codecContext->max_b_frames = 1;
    codecContext->pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P;

    // Describe the destination YUV frame
    AVFrame* inputFrame;
    inputFrame = ffmpeg.av_frame_alloc();
    inputFrame->format = (int)codecContext->pix_fmt;
    inputFrame->width = captureWidth;
    inputFrame->height = captureHeight;
    inputFrame->linesize[0] = inputFrame->width;

    // Pin the managed byte array so native code can read it
    AVPixelFormat dst_pix_fmt = AVPixelFormat.AV_PIX_FMT_YUV420P, src_pix_fmt = AVPixelFormat.AV_PIX_FMT_RGBA;
    int src_w = 1920, src_h = 1080, dst_w = 1920, dst_h = 1080;
    SwsContext* sws_ctx;
    GCHandle pinned = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    IntPtr address = pinned.AddrOfPinnedObject();
    sbyte** inputData = (sbyte**)address;
    sws_ctx = ffmpeg.sws_getContext(src_w, src_h, src_pix_fmt,
                                    dst_w, dst_h, dst_pix_fmt,
                                    0, null, null, null);

    fixed (int* lineSize = new int[1])
    {
        lineSize[0] = 4 * captureHeight;
        // Convert RGBA to YUV420P
        ffmpeg.sws_scale(sws_ctx, inputData, lineSize, 0, codecContext->width, inputFrame->extended_data, inputFrame->linesize);
    }

    inputFrame->pts = counter++;
    if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
        throw new ApplicationException("Error sending a frame for encoding!");

    AVPacket pkt;
    pkt = new AVPacket();
    //pkt.data = inData;
    AVPacket* packet = &pkt;
    ffmpeg.av_init_packet(packet);
    Debug.Log("pkt.size " + pkt.size);
    pinned.Free();

    // Open a UDP output and write the encoded packet to it
    AVDictionary* options = null;
    ffmpeg.av_dict_set(&options, "pkt_size", "1300", 0);
    ffmpeg.av_dict_set(&options, "buffer_size", "65535", 0);
    AVIOContext* server = null;
    ffmpeg.avio_open2(&server, "udp://192.168.0.1:1111", ffmpeg.AVIO_FLAG_WRITE, null, &options);
    Debug.Log("encoded");
    ret = ffmpeg.avcodec_encode_video2(codecContext, &pkt, inputFrame, &got_output);
    ffmpeg.avio_write(server, pkt.data, pkt.size);
    ffmpeg.av_free_packet(&pkt);
    pkt.data = null;
    pkt.size = 0;
}

And every time I start the game
if (ffmpeg.avcodec_send_frame(codecContext, inputFrame) < 0)
    throw new ApplicationException("Error sending a frame for encoding!");
throws the exception.
Any help in fixing the issue would be greatly appreciated :)