
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (51)
-
Customising categories
21 June 2013, by
Category creation form
For those who know SPIP well, a category can be thought of as a section (rubrique).
For a document of type "category", the fields offered by default are: Text
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type "media", the fields not displayed by default are: Descriptif rapide
It is also in this configuration area that you can specify the (...) -
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area in order to change the rights for creating, modifying and deleting notes. By default, only the site administrators can add notes to images.
Changes when adding a media item
When adding a media item of type "image", a new button appears above the preview (...) -
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into other languages, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...)
On other sites (11086)
-
How to write a video file using FFmpeg
15 January 2024, by Summit
I am trying to write a video file using FFmpeg but I get the following errors:


[libx264 @ 000002bdf90c3c00] broken ffmpeg default settings detected
[libx264 @ 000002bdf90c3c00] use an encoding preset (e.g. -vpre medium)
[libx264 @ 000002bdf90c3c00] preset usage: -vpre <speed> -vpre <profile>
[libx264 @ 000002bdf90c3c00] speed presets are listed in x264 --help
[libx264 @ 000002bdf90c3c00] profile is optional; x264 defaults to high


This is my code


#pragma warning(disable : 4996)

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
#include <libavutil/mathematics.h>
#include <libswscale/swscale.h>
}

int main() {
 av_register_all();
 AVFormatContext* formatContext = nullptr;
 AVOutputFormat* outputFormat = nullptr;
 AVStream* videoStream = nullptr;

 const char* filename = "output.mp4";

 // Open the output file
 if (avformat_alloc_output_context2(&formatContext, nullptr, nullptr, filename) < 0) {
 fprintf(stderr, "Error allocating output format context\n");
 return -1;
 }

 outputFormat = formatContext->oformat;

 // Add a video stream
 videoStream = avformat_new_stream(formatContext, nullptr);
 if (!videoStream) {
 fprintf(stderr, "Error creating video stream\n");
 return -1;
 }

 // Set codec parameters, you may need to adjust these based on your needs
 AVCodecContext* codecContext = avcodec_alloc_context3(nullptr);
 codecContext->codec_id = outputFormat->video_codec;
 codecContext->codec_type = AVMEDIA_TYPE_VIDEO;
 codecContext->pix_fmt = AV_PIX_FMT_YUV420P;
 codecContext->width = 640;
 codecContext->height = 480;
 codecContext->time_base = { 1, 25 };

 // Open the video codec
 AVCodec* videoCodec = avcodec_find_encoder(codecContext->codec_id);
 if (!videoCodec) {
 fprintf(stderr, "Error finding video codec\n");
 return -1;
 }

 if (avcodec_open2(codecContext, videoCodec, nullptr) < 0) {
 fprintf(stderr, "Error opening video codec\n");
 return -1;
 }

 videoStream->codecpar->codec_id = codecContext->codec_id;
 videoStream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 videoStream->codecpar->format = codecContext->pix_fmt;
 videoStream->codecpar->width = codecContext->width;
 videoStream->codecpar->height = codecContext->height;

 if (avformat_write_header(formatContext, nullptr) < 0) {
 fprintf(stderr, "Error writing header\n");
 return -1;
 }

 // Create a frame
 AVFrame* frame = av_frame_alloc();
 frame->format = codecContext->pix_fmt;
 frame->width = codecContext->width;
 frame->height = codecContext->height;
 av_frame_get_buffer(frame, 32);

 // Fill the frame with red color
 for (int y = 0; y < codecContext->height; ++y) {
 for (int x = 0; x < codecContext->width; ++x) {
 frame->data[0][y * frame->linesize[0] + x * 3] = 255; // Red component
 frame->data[0][y * frame->linesize[0] + x * 3 + 1] = 0; // Green component
 frame->data[0][y * frame->linesize[0] + x * 3 + 2] = 0; // Blue component
 }
 }

 // Write video frames
 AVPacket packet;
 for (int i = 0; i < 100; ++i) {
 // Send the frame for encoding
 if (avcodec_send_frame(codecContext, frame) < 0) {
 fprintf(stderr, "Error sending a frame for encoding\n");
 return -1;
 }

 // Receive the encoded packet
 while (avcodec_receive_packet(codecContext, &packet) == 0) {
 // Write the packet to the output file
 if (av_write_frame(formatContext, &packet) != 0) {
 fprintf(stderr, "Error writing video frame\n");
 return -1;
 }
 av_packet_unref(&packet);
 }
 }

 // Write the trailer
 if (av_write_trailer(formatContext) != 0) {
 fprintf(stderr, "Error writing trailer\n");
 return -1;
 }

 // Clean up resources
 av_frame_free(&frame);
 avcodec_free_context(&codecContext);
 avformat_free_context(formatContext);

 return 0;
}
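
For what it's worth, the "broken ffmpeg default settings detected" message typically comes from the libx264 wrapper when it is handed the stock libavcodec defaults and no encoding preset, which is what happens here because the AVCodecContext is allocated with avcodec_alloc_context3(nullptr) rather than from the encoder (the "-vpre" hint in the log also suggests a fairly old FFmpeg build). Below is only a minimal, hedged sketch of how the same task (100 solid-red frames to output.mp4) is commonly written against a reasonably recent FFmpeg (4.x/5.x, where av_register_all() is no longer needed); the "medium" preset and the BT.601-ish YUV constants for red are illustrative choices, not requirements.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/opt.h>
}
#include <cstring>

int main() {
    const char* filename = "output.mp4";

    AVFormatContext* fmt = nullptr;
    if (avformat_alloc_output_context2(&fmt, nullptr, nullptr, filename) < 0)
        return -1;

    // Allocate the codec context *from the encoder* so libx264's own defaults apply,
    // and pick a preset explicitly instead of relying on generic lavc settings.
    const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec)
        return -1;
    AVStream* stream = avformat_new_stream(fmt, nullptr);
    AVCodecContext* enc = avcodec_alloc_context3(codec);
    enc->width = 640;
    enc->height = 480;
    enc->pix_fmt = AV_PIX_FMT_YUV420P;
    enc->time_base = {1, 25};
    if (fmt->oformat->flags & AVFMT_GLOBALHEADER)
        enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;           // MP4 wants extradata in the header
    av_opt_set(enc->priv_data, "preset", "medium", 0);       // illustrative preset choice

    if (avcodec_open2(enc, codec, nullptr) < 0)
        return -1;
    avcodec_parameters_from_context(stream->codecpar, enc);  // copy the *opened* parameters
    stream->time_base = enc->time_base;

    // File-based muxers need an opened AVIOContext before the header is written.
    if (!(fmt->oformat->flags & AVFMT_NOFILE) &&
        avio_open(&fmt->pb, filename, AVIO_FLAG_WRITE) < 0)
        return -1;
    if (avformat_write_header(fmt, nullptr) < 0)
        return -1;

    AVFrame* frame = av_frame_alloc();
    frame->format = enc->pix_fmt;
    frame->width  = enc->width;
    frame->height = enc->height;
    av_frame_get_buffer(frame, 0);

    AVPacket* pkt = av_packet_alloc();
    for (int i = 0; i < 100; ++i) {
        av_frame_make_writable(frame);
        // Solid red expressed in YUV420P (roughly Y=76, U=84, V=255 under BT.601).
        memset(frame->data[0], 76,  (size_t)frame->linesize[0] * enc->height);
        memset(frame->data[1], 84,  (size_t)frame->linesize[1] * enc->height / 2);
        memset(frame->data[2], 255, (size_t)frame->linesize[2] * enc->height / 2);
        frame->pts = i;                                       // pts is mandatory, in enc->time_base units

        avcodec_send_frame(enc, frame);
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, stream->time_base);
            pkt->stream_index = stream->index;
            av_interleaved_write_frame(fmt, pkt);
            av_packet_unref(pkt);
        }
    }

    // Flush the delayed packets still inside the encoder before writing the trailer.
    avcodec_send_frame(enc, nullptr);
    while (avcodec_receive_packet(enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc->time_base, stream->time_base);
        pkt->stream_index = stream->index;
        av_interleaved_write_frame(fmt, pkt);
        av_packet_unref(pkt);
    }

    av_write_trailer(fmt);
    if (!(fmt->oformat->flags & AVFMT_NOFILE))
        avio_closep(&fmt->pb);
    av_packet_free(&pkt);
    av_frame_free(&frame);
    avcodec_free_context(&enc);
    avformat_free_context(fmt);
    return 0;
}

If the original structure is kept instead, the two changes that usually silence the x264 error are avcodec_alloc_context3(videoCodec) and av_opt_set(codecContext->priv_data, "preset", "medium", 0) before avcodec_open2(); the missing avio_open(), frame->pts and encoder flush would then still need to be addressed separately to get a playable file.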



-
Reading FFmpeg bytes from named pipes, extracted NAL units are bad/corrupted
12 April 2023, by Mr Squidr
I'm trying to read an .mp4 file with ffmpeg and read bytes from the named pipe, which I then want to package into an RTP stream and send those packets over WebRTC.


What I learned is that H.264 video consists of many NAL units. So what I do in my code is read the bytes from the named pipe and try to extract NAL units. The problem is that the bytes I get don't seem to make sense, as NAL unit starts are sometimes only a few bytes apart.


I tested on multiple different mp4 files and on multiple h264 files; all have the same issues. NAL unit starts are found, but they aren't separated properly, or what I'm reading aren't NAL units at all. For example, the NAL unit start offsets from reading a sample .h264 file were: 4, 32, 41, 717. This doesn't make much sense if these are NAL units: some are too close together and some are far apart. I'm lost as to what I'm doing wrong.


The issue might also be in the ffmpeg command itself. I do think I need the "-c:v libx264 -bsf:v h264_mp4toannexb" arguments for the output to be in the correct format, but I'm not certain.


I did try sending NAL units that seemed OK over WebRTC, but nothing was displayed on the receiving end (probably because of how H.264 relies on previous frames, I'm not sure).


I have been struggling with this issue for the past few days now, and no matter what I tried, the NAL units were never as they should be.


Code to start the ffmpeg process from C#:


var proc = new Process()
{
 StartInfo =
 {
 FileName = FFMPEG_LIB_PATH,
 Arguments = "-y -re -i input.mp4 -an -c:v libx264 -bsf:v h264_mp4toannexb -f image2pipe ffmpeg_rec_stream",
 UseShellExecute = false,
 CreateNoWindow = true,
 RedirectStandardInput = false,
 RedirectStandardOutput = true,
 }
};



Code to connect to the named pipe:


var mOutputPipe = new NamedPipeServerStream($"ffmpeg_rec_stream", PipeDirection.InOut, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous, 102400, 102400);
mOutputPipe.BeginWaitForConnection(OnOutputPipeConnected, null);



Code for OnOutputPipeConnected


private void OnOutputPipeConnected(IAsyncResult ar)
 {
 try
 {
 mOutputPipe.EndWaitForConnection(ar);
 var buffer = new byte[65536];
 while (true)
 {
 int bytesRead = mOutputPipe.Read(buffer, 0, buffer.Length);
 if (bytesRead == 0)
 {
 break;
 }

 var nalUnitStarts = FindAllNalUnitIndexes(buffer, bytesRead);
 for (int i = 0; i < nalUnitStarts.Count - 1; i++)
 {
 int nalStartIndex = nalUnitStarts[i];
 int nalEndIndex = nalUnitStarts[i + 1] - 1;
 int nalLength = nalEndIndex - nalStartIndex + 1;
 byte[] nalUnit = new byte[nalLength];
 Buffer.BlockCopy(buffer, nalStartIndex, nalUnit, 0, nalLength);

 // send nalUnit over to webrtc client
 var rtpPacket = new RTPPacket(nalUnit);
 RecordingSession?.RTCPeer.SendRtpRaw(SDPMediaTypesEnum.video, rtpPacket.Payload, rtpPacket.Header.Timestamp, rtpPacket.Header.MarkerBit, 100);
 }
 }
 }
 catch (Exception e)
 {
 
 }
 }



Code for finding NAL units:


private static List<int> FindAllNalUnitIndexes(byte[] buffer, int length)
{
 var indexes = new List<int>();
 int i = 0;

 while (i < length - 4)
 {
 int nalStart = FindNextNalUnit(buffer, i, length);
 if (nalStart == -1)
 {
 break;
 }
 else
 {
 indexes.Add(nalStart);
 i = nalStart + 1;
 }
 }

 return indexes;
}

private static int FindNextNalUnit(byte[] buffer, int startIndex, int length)
{
 for (int i = startIndex; i < length - 4; i++)
 {
 if (buffer[i] == 0 && buffer[i + 1] == 0 && (buffer[i + 2] == 1 || (buffer[i + 2] == 0 && buffer[i + 3] == 1)))
 {
 return i + (buffer[i + 2] == 1 ? 3 : 4);
 }
 }
 return -1;
}


-
Android Camera Video frames decoding coming out distorted with horizontal lines
13 November 2018, by Iain Stanford
I've been porting over the following Android test example to run in a simple Xamarin Android project.
https://bigflake.com/mediacodec/ExtractMpegFramesTest_egl14.java.txt
I’m running a video captured by the camera (on the same device) through this pipeline, but the PNGs I’m getting out the other end are distorted, I assume due to the minefield of Android camera color spaces.
Here are the images I’m getting running a camera video through the pipeline...
It’s hard to tell, but it ’kinda’ looks like a single line of the actual image, stretched across. But I honestly wouldn’t want to bank on that being the issue, as it could be a red herring.
However, when I run a ’normal’ video that I grabbed online through the same pipeline, it works completely fine.
I used the first video found on here (the lego one) http://techslides.com/sample-webm-ogg-and-mp4-video-files-for-html5
And I get frames like this...
Checking out some of the ffmpeg probe data of the video, both this and my camera video have the same pixel format (pix_fmt=yuv420p), but there are differences in color_range.
The video that works has:
color_range=tv
color_space=bt709
color_transfer=bt709
color_primaries=bt709
And the camera video just has:
color_range=unknown
color_space=unknown
color_transfer=unknown
color_primaries=unknown
The media format of the camera video appears to be semi-planar YUV; the codec output gets updated to that, at least. I get an OutputBuffersChanged message which sets the output buffer of the MediaCodec to the following:
{
mime=video/raw,
crop-top=0,
crop-right=639,
slice-height=480,
color-format=21,
height=480,
width=640,
what=1869968451,
crop-bottom=479,
crop-left=0,
stride=640
}
I can also point the codec output to a TextureView as opposed to an OpenGL surface, and just grab the Bitmap that way (obviously slower), and those frames look fine. So maybe it’s the OpenGL display of the raw codec output? Does Android TextureView do its own decoding?
Note - The reason I’m looking into all this is that I need to run some form of image processing on a raw camera feed at as close to 30fps as possible. Obviously, this is not possible on some devices, but recording a video at 30fps and then processing the video after the fact is a possible workaround I’m investigating. I’d rather try to process the image in OpenGL for the improved speed than take each frame as a Bitmap from the TextureView output.
In researching this I’ve seen someone else with pretty much the exact same issue here: "How to properly save frames from mp4 as png files using ExtractMpegFrames.java?", although he didn’t seem to have much luck finding out what might be going wrong.
EDIT - FFmpeg probe outputs for both videos...
Video that works - https://justpaste.it/484ec .
Video that fails - https://justpaste.it/55in0 .