
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (47)
-
Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP. You can of course add your own via the form at the bottom of the page. -
Submitting improvements and additional plugins
10 April 2011
If you have developed a new extension that adds one or more useful features to MediaSPIP, let us know and its integration into the official distribution will be considered.
You can use the development mailing list to announce it, or to ask for help with building the plugin. Since MediaSPIP is based on SPIP, it is also possible to use SPIP's SPIP-zone mailing list to (...) -
Permissions overridden by plugins
27 April 2010, by
MediaSPIP core
autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page
On other sites (8671)
-
Java - RTSP save snapshot from Stream Packets
9 August 2016, by Guerino Rodella
I'm developing an application that requests snapshots from DVRs and IP cameras. The device I'm working with only offers RTSP to do so. So I implemented the necessary RTSP methods to start receiving the stream packets, and I began receiving them over the established UDP connection. My doubt is: how can I save the received data to a JPEG file? Where are the beginning and the end of the image bytes in what I receive?
I searched for libraries that implement this kind of service in Java, like Xuggler (which is no longer maintained) and javacpp-presets, which bundles the ffmpeg and opencv libraries; I had some environment problems with it. If someone knows an easy, solid one that saves snapshots from streams, let me know.
My code:
final long timeout = System.currentTimeMillis() + 3000;
byte[] fullImage = new byte[ 1024 * 1024 ];
DatagramSocket udpSocket = new DatagramSocket( 8000 );
int lastByte = 0;

// Skip the first 2 packets because I think they are HEADERS.
// Since I don't know what they mean, I just print them in hex.
for( int i = 0; i < 2; i++ ){
    byte[] buffer = new byte[ 1024 ];
    DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
    udpSocket.receive( dataPacket );
    int dataLength = dataPacket.getLength();
    buffer = Arrays.copyOf( buffer, dataLength );
    System.out.println( "RECEIVED : " + DatatypeConverter.printHexBinary( buffer ) + " L : " + dataLength );
}

do{
    byte[] buffer = new byte[ 1024 ];
    DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
    udpSocket.receive( dataPacket );
    System.out.println( "RECEIVED : " + new String( buffer, 0, dataPacket.getLength() ) );
    // Append only the bytes actually received to the image buffer.
    for( int i = 0; i < dataPacket.getLength(); i++ ){
        fullImage[ lastByte ] = buffer[ i ];
        lastByte++;
    }
} while( System.currentTimeMillis() < timeout );
// I know this timeout is wrong, I should stop after getting the full image bytes.

The output:
RECEIVED : 80E0000100004650000000006742E01FDA014016C4 L : 21
RECEIVED : 80E00002000046500000000068CE30A480 L : 17
RECEIVED : Tons of data from the streaming...
RECEIVED : Tons of data from the streaming...
RECEIVED : Tons of data from the streaming...
[...]
As you might suppose, the image I'm saving into a file is not readable, because I'm doing it wrong. I think the first packets give me some info about the packets the server will send, telling me the start and the end of the image within the stream, but I haven't understood them. Does someone know how to solve it? Any tips are welcome!
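For reference, those first two packets are not proprietary headers; they parse as standard RTP packets (RFC 3550). In 80E0000100004650000000006742E01FDA014016C4, the byte 80 is RTP version 2, E0 is the marker bit plus payload type 96, 0001 is the sequence number, 00004650 the timestamp, 00000000 the SSRC, and the payload beginning with 67 is an H.264 SPS NAL unit; the second packet's payload, 68CE30A480, is the PPS. So the datagrams cannot simply be concatenated into a JPEG: each RTP payload has to be depacketized into H.264 NAL units (RFC 6184), the NAL units decoded into a raw frame, and the frame re-encoded as JPEG. Below is a minimal depacketizing sketch, in C++ for concreteness (the byte offsets translate one-to-one to Java); the names are mine, and only single NAL units and FU-A fragments are handled, so treat it as an illustration of the packet layout rather than a complete RFC 6184 implementation:

#include <cstdint>
#include <vector>

// Depacketize one RTP/H.264 packet (RFC 3550 header, RFC 6184 payload)
// and append the contained NAL unit(s), Annex-B framed, to 'out'.
// Returns false if the packet is malformed. FU-A reassembly state is
// kept in 'fua' between calls.
struct FuaState { std::vector<uint8_t> buf; bool active = false; };
static const uint8_t kStartCode[4] = { 0, 0, 0, 1 };

bool depacketize(const uint8_t* pkt, size_t len,
                 std::vector<uint8_t>& out, FuaState& fua)
{
    if (len < 13 || (pkt[0] >> 6) != 2) return false;   // 12-byte fixed header, version 2
    size_t off = 12 + 4 * (pkt[0] & 0x0F);              // skip the CSRC list, if any
    if (pkt[0] & 0x10) {                                // header extension present
        if (len < off + 4) return false;
        off += 4 + 4 * ((pkt[off + 2] << 8) | pkt[off + 3]);
    }
    if (len <= off) return false;
    const uint8_t* p = pkt + off;
    size_t n = len - off;
    uint8_t nalType = p[0] & 0x1F;

    if (nalType >= 1 && nalType <= 23) {                // single NAL unit (SPS=7, PPS=8, IDR=5, ...)
        out.insert(out.end(), kStartCode, kStartCode + 4);
        out.insert(out.end(), p, p + n);
    } else if (nalType == 28 && n >= 2) {               // FU-A fragment
        bool start = p[1] & 0x80, end = p[1] & 0x40;
        if (start) {                                    // rebuild the original NAL header
            fua.buf.assign(kStartCode, kStartCode + 4);
            fua.buf.push_back((p[0] & 0xE0) | (p[1] & 0x1F));
            fua.active = true;
        }
        if (!fua.active) return false;                  // lost the start fragment
        fua.buf.insert(fua.buf.end(), p + 2, p + n);
        if (end) {
            out.insert(out.end(), fua.buf.begin(), fua.buf.end());
            fua.active = false;
        }
    }                                                   // STAP-A (type 24) etc. omitted for brevity
    return true;
}

The resulting Annex-B stream (SPS + PPS + the slices of one IDR frame) is what a decoder will accept as a snapshot. The RTP marker bit (the high bit of the second byte, set in the E0 above) flags the last packet of a frame, which answers the "where does the image end" part; the sequence number is what you would use to detect loss and reordering before trusting a reassembled frame.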
-
Save a stream of arrays to video using FFMPEG
13 December 2022, by Gianluca Iacchini
I made a simple fluid simulation using CUDA, and I'm trying to save it to a video using FFMPEG, however I get the
Finishing stream 0:0 without any data written to it
warning.

This is how I send the data:


unsigned char* data = new unsigned char[SCR_WIDTH * SCR_HEIGHT * 4];
uchar4* pColors = new uchar4[SCR_WIDTH * SCR_HEIGHT];

for (int i = 0; i < N_FRAMES; i++)
{
    // Computes a simulation step and sets pColors with the correct values.
    on_frame(pColors, timeStepSize);
    for (int j = 0; j < SCR_WIDTH * SCR_HEIGHT * 4; j += 4)
    {
        // pColors holds one uchar4 per pixel, so it must be indexed by pixel
        // (j / 4), not by the byte offset j, which runs past the end of the array.
        data[j]     = pColors[j / 4].x;
        data[j + 1] = pColors[j / 4].y;
        data[j + 2] = pColors[j / 4].z;
        data[j + 3] = pColors[j / 4].w;
    }
    std::cout.write(reinterpret_cast<const char*>(data), SCR_WIDTH * SCR_HEIGHT * 4);
}



And then I pass it to FFMPEG using the following command:


./simulation.o | ffmpeg -y -f rawvideo -pixel_format rgba -video_size 1024x1024 -i - -c:v libx264 -pix_fmt yuv444p -crf 0 video.mp4


This works fine if I hard-code the values (e.g. if I set data[j] = 255 I get a red screen as expected), but when I use the pColors variable I get the following message from FFMPEG:

Finishing stream 0:0 without any data written to it.

Even though both pColors and data hold the correct values.

Here is the full report from FFMPEG


ffmpeg started on 2022-12-13 at 14:28:34
Report written to "ffmpeg-20221213-142834.log"
Command line:
ffmpeg -y -f rawvideo -report -pixel_format rgba -video_size 128x128 -i - -c:v libx264 -pix_fmt yuv444p -crf 0 video9.mp4
ffmpeg version 3.4.11-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
 configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chrom
 libavutil 55. 78.100 / 55. 78.100
 libavcodec 57.107.100 / 57.107.100
 libavformat 57. 83.100 / 57. 83.100
 libavdevice 57. 10.100 / 57. 10.100
 libavfilter 6.107.100 / 6.107.100
 libavresample 3. 7. 0 / 3. 7. 0
 libswscale 4. 8.100 / 4. 8.100
 libswresample 2. 9.100 / 2. 9.100
 libpostproc 54. 7.100 / 54. 7.100
Splitting the commandline.
Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'rawvideo'.
Reading option '-report' ... matched as option 'report' (generate a report) with argument '1'.
Reading option '-pixel_format' ... matched as AVOption 'pixel_format' with argument 'rgba'.
Reading option '-video_size' ... matched as AVOption 'video_size' with argument '128x128'.
Reading option '-i' ... matched as input url with argument '-'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-pix_fmt' ... matched as option 'pix_fmt' (set pixel format) with argument 'yuv444p'.
Reading option '-crf' ... matched as AVOption 'crf' with argument '0'.
Reading option 'video9.mp4' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option y (overwrite output files) with argument 1.
Applying option report (generate a report) with argument 1.
Successfully parsed a group of options.
Parsing a group of options: input url -.
Applying option f (force format) with argument rawvideo.
Successfully parsed a group of options.
Opening an input file: -.
[rawvideo @ 0x558eba7b0000] Opening 'pipe:' for reading
[pipe @ 0x558eba78a080] Setting default whitelist 'crypto'
[rawvideo @ 0x558eba7b0000] Before avformat_find_stream_info() pos: 0 bytes read:0 seeks:0 nb_streams:1
[rawvideo @ 0x558eba7b0000] After avformat_find_stream_info() pos: 0 bytes read:0 seeks:0 frames:0
Input #0, rawvideo, from 'pipe:':
 Duration: N/A, bitrate: 13107 kb/s
 Stream #0:0, 0, 1/25: Video: rawvideo (RGBA / 0x41424752), rgba, 128x128, 13107 kb/s, 25 tbr, 25 tbn, 25 tbc
Successfully opened the file.
Parsing a group of options: output url video9.mp4.
Applying option c:v (codec name) with argument libx264.
Applying option pix_fmt (set pixel format) with argument yuv444p.
Successfully parsed a group of options.
Opening an output file: video9.mp4.
[file @ 0x558eba78a200] Setting default whitelist 'file,crypto'
Successfully opened the file.
Stream mapping:
 Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
cur_dts is invalid (this is harmless if it occurs once at the start per stream)
No more output streams to write to, finishing.
Finishing stream 0:0 without any data written to it.
detected 2 logical cores
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'video_size' to value '128x128'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'pix_fmt' to value '28'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'time_base' to value '1/25'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'pixel_aspect' to value '0/1'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'sws_param' to value 'flags=2'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] Setting 'frame_rate' to value '25/1'
[graph 0 input from stream 0:0 @ 0x558eba7a4a00] w:128 h:128 pixfmt:rgba tb:1/25 fr:25/1 sar:0/1 sws_param:flags=2
[format @ 0x558eba7a4b40] compat: called with args=[yuv444p]
[format @ 0x558eba7a4b40] Setting 'pix_fmts' to value 'yuv444p'
[auto_scaler_0 @ 0x558eba7a4be0] Setting 'flags' to value 'bicubic'
[auto_scaler_0 @ 0x558eba7a4be0] w:iw h:ih flags:'bicubic' interl:0
[format @ 0x558eba7a4b40] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[AVFilterGraph @ 0x558eba76d500] query_formats: 4 queried, 2 merged, 1 already done, 0 delayed
[auto_scaler_0 @ 0x558eba7a4be0] w:128 h:128 fmt:rgba sar:0/1 -> w:128 h:128 fmt:yuv444p sar:0/1 flags:0x4
[libx264 @ 0x558eba7cf900] using mv_range_thread = 24
[libx264 @ 0x558eba7cf900] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
[libx264 @ 0x558eba7cf900] profile High 4:4:4 Predictive, level 1.1, 4:4:4 8-bit
[libx264 @ 0x558eba7cf900] 264 - core 152 r2854 e9a5903 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=7 psy=0 mixed_ref=1 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=0 chroma_qp_offset=0 threads=3 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc=cqp mbtree=0 qp=0
Output #0, mp4, to 'video9.mp4':
 Metadata:
 encoder : Lavf57.83.100
 Stream #0:0, 0, 1/12800: Video: h264 (libx264) (avc1 / 0x31637661), yuv444p, 128x128, q=-1--1, 25 fps, 12800 tbn, 25 tbc
 Metadata:
 encoder : Lavc57.107.100 libx264
 Side data:
 cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=00:00:00.00 bitrate=N/A speed= 0x 
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Input file #0 (pipe:):
 Input stream #0:0 (video): 0 packets read (0 bytes); 0 frames decoded; 
 Total: 0 packets (0 bytes) demuxed
Output file #0 (video9.mp4):
 Output stream #0:0 (video): 0 frames encoded; 0 packets muxed (0 bytes); 
 Total: 0 packets (0 bytes) muxed
0 frames successfully decoded, 0 decoding errors
[AVIOContext @ 0x558eba7b4120] Statistics: 2 seeks, 3 writeouts
[AVIOContext @ 0x558eba7b4000] Statistics: 0 bytes read, 0 seeks




I've never used FFMPEG before so I'm having a hard time finding my mistake.
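One detail in the report narrows this down: ffmpeg demuxed 0 packets (0 bytes), so nothing ever arrived on the pipe, which is what you would see if the producer crashed before its first write. The original indexing (pColors[j] with j running up to SCR_WIDTH * SCR_HEIGHT * 4, against an array of SCR_WIDTH * SCR_HEIGHT elements, corrected in the listing above) reads far out of bounds and can do exactly that. Here is a minimal producer for validating the pipeline independently of the simulation, a sketch in which every name is hypothetical:

#include <cstdio>
#include <cstdint>
#include <vector>

// Minimal producer to validate the pipe independently of the simulation:
// it writes N_FRAMES of RGBA gradient frames to stdout and nothing else.
int main()
{
    const int W = 1024, H = 1024, N_FRAMES = 250;
    std::vector<uint8_t> data(static_cast<size_t>(W) * H * 4);

    for (int i = 0; i < N_FRAMES; i++) {
        for (int p = 0; p < W * H; p++) {
            data[p * 4 + 0] = static_cast<uint8_t>(i);      // R: animates over time
            data[p * 4 + 1] = static_cast<uint8_t>(p % W);  // G: horizontal gradient
            data[p * 4 + 2] = static_cast<uint8_t>(p / W);  // B: vertical gradient
            data[p * 4 + 3] = 255;                          // A: opaque
        }
        // Diagnostics must go to stderr; stdout carries only pixel bytes.
        if (std::fwrite(data.data(), 1, data.size(), stdout) != data.size()) {
            std::fprintf(stderr, "short write on frame %d\n", i);
            return 1;
        }
    }
    std::fflush(stdout);  // ensure the last frames reach ffmpeg before exit
    return 0;
}

If ./test.o | ffmpeg -y -f rawvideo -pixel_format rgba -video_size 1024x1024 -framerate 25 -i - -c:v libx264 -pix_fmt yuv444p -crf 0 video.mp4 yields a playable animated gradient, the pipe and the ffmpeg invocation are fine and the bug is on the CUDA side; also make sure nothing else in the program prints to stdout, since any stray text corrupts the raw frames.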


-
What's wrong with how I save a vector of AVFrames as an MP4 video using the h264 encoder?
8 April 2023, by nokla
I am trying to encode a vector of AVFrames into an MP4 file using the H.264 codec.


The code runs without errors, but when I try to open the saved video file with either Windows Media Player or Adobe Media Encoder, it says that it is in an unsupported format.


I went through it with a debugger and everything seemed to work fine.



This is the function I used to save the video:


void SaveVideo(std::string& output_filename, std::vector<AVFrame> video)
{
 // Initialize FFmpeg
 avformat_network_init();

 // Open the output file context
 AVFormatContext* format_ctx = nullptr;
 int ret = avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, output_filename.c_str());
 if (ret < 0) {
 wxMessageBox("Error creating output context: ");
 wxMessageBox(av_err2str(ret));
 return;
 }

 // Open the output file
 ret = avio_open(&format_ctx->pb, output_filename.c_str(), AVIO_FLAG_WRITE);
 if (ret < 0) {
 std::cerr << "Error opening output file: " << av_err2str(ret) << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 // Create the video stream
 const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!codec) {
 std::cerr << "Error finding H.264 encoder" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 AVStream* stream = avformat_new_stream(format_ctx, codec);
 if (!stream) {
 std::cerr << "Error creating output stream" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 // Set the stream parameters
 stream->codecpar->codec_id = AV_CODEC_ID_H264;
 stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 stream->codecpar->width =video.front().width;
 stream->codecpar->height = video.front().height;
 stream->codecpar->format = AV_PIX_FMT_YUV420P;
 stream->codecpar->bit_rate = 400000;
 AVRational framerate = { 1, 30};
 stream->time_base = av_inv_q(framerate);

 // Open the codec context
 AVCodecContext* codec_ctx = avcodec_alloc_context3(codec);
 codec_ctx->codec_tag = 0;
 codec_ctx->time_base = stream->time_base;
 codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
 if (!codec_ctx) {
 std::cout << "Error allocating codec context" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
 if (ret < 0) {
 std::cout << "Error setting codec context parameters: " << av_err2str(ret) << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 AVDictionary* opt = NULL;
 ret = avcodec_open2(codec_ctx, codec, &opt);
 if (ret < 0) {
 wxMessageBox("Error opening codec: ");
 wxMessageBox(av_err2str(ret));
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Allocate a buffer for the frame data
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Error allocating frame" << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 frame->format = codec_ctx->pix_fmt;
 frame->width = codec_ctx->width;
 frame->height = codec_ctx->height;

 ret = av_frame_get_buffer(frame, 0);
 if (ret < 0) {
 std::cerr << "Error allocating frame buffer: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Allocate a buffer for the converted frame data
 AVFrame* converted_frame = av_frame_alloc();
 if (!converted_frame) {
 std::cerr << "Error allocating converted frame" << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 converted_frame->format = AV_PIX_FMT_YUV420P;
 converted_frame->width = codec_ctx->width;
 converted_frame->height = codec_ctx->height;

 ret = av_frame_get_buffer(converted_frame, 0);
 if (ret < 0) {
 std::cerr << "Error allocating converted frame buffer: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Initialize the converter
 SwsContext* converter = sws_getContext(
 codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt,
 codec_ctx->width, codec_ctx->height, AV_PIX_FMT_YUV420P,
 SWS_BICUBIC, nullptr, nullptr, nullptr
 );
 if (!converter) {
 std::cerr << "Error initializing converter" << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the header to the output file
 ret = avformat_write_header(format_ctx, nullptr);
 if (ret < 0) {
 std::cerr << "Error writing header to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Iterate over the frames and write them to the output file
 int frame_count = 0;
 for (auto& src_frame : video) {
 {
 // Convert the source frame to the output format
 sws_scale(converter,
 src_frame.data, src_frame.linesize, 0, src_frame.height,
 converted_frame->data, converted_frame->linesize
 );

 // Set the frame properties
 converted_frame->pts = av_rescale_q(frame_count, stream->time_base, codec_ctx->time_base);
 frame_count++;
 //converted_frame->time_base.den = codec_ctx->time_base.den;
 //converted_frame->time_base.num = codec_ctx->time_base.num;
 // Encode the frame and write it to the output
 ret = avcodec_send_frame(codec_ctx, converted_frame);
 if (ret < 0) {
 std::cerr << "Error sending frame for encoding: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 std::string a = av_err2str(ret);
 break;
 }
 else if (ret < 0) {
 wxMessageBox("Error during encoding");
 wxMessageBox(av_err2str(ret));
 av_packet_unref(pkt);
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the packet to the output file
 av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
 pkt->stream_index = stream->index;
 ret = av_interleaved_write_frame(format_ctx, pkt);
 av_packet_unref(pkt);
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 }
 }
 }

 // Flush the encoder
 ret = avcodec_send_frame(codec_ctx, nullptr);
 if (ret < 0) {
 std::cerr << "Error flushing encoder: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 while (ret >= 0) {
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return;
 }
 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 wxMessageBox("Error recieving packet");
 wxMessageBox(av_err2str(ret));
 break;
 }
 else if (ret < 0) {
 std::cerr << "Error during encoding: " << av_err2str(ret) << std::endl;
 av_packet_unref(pkt);
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the packet to the output file
 av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
 pkt->stream_index = stream->index;
 ret = av_interleaved_write_frame(format_ctx, pkt);
 av_packet_unref(pkt);
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 }

 // Write the trailer to the output file
 ret = av_write_trailer(format_ctx);
 if (ret < 0) {
 std::cerr << "Error writing trailer to output file: " << av_err2str(ret) << std::endl;
 }

 // Free all resources
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
}



** I know it is not the prettiest way to write this code; I just wanted to try doing something like this.


** This is an altered version of the function, as the original one was inside a class. I changed it so you could compile it, but it might have some errors if I forgot to change something.


Any help would be appreciated.
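Two things stand out in this function, judging from the FFmpeg API contract rather than from running this exact code. First, AV_CODEC_FLAG_GLOBAL_HEADER makes avcodec_open2() put the generated H.264 SPS/PPS into codec_ctx->extradata, but avcodec_parameters_from_context() is never called after the codec is opened, so that extradata never reaches stream->codecpar and the MP4 is written without a decoder configuration record, a classic reason for players to report an unsupported format. Second, the pts is produced by rescaling a bare frame counter from stream->time_base, although the counter is not expressed in that time base. A hedged sketch of the conventional setup order, reusing the names from the function above:

// Configure the encoder context directly, open it, and only then copy the
// opened context (extradata included) into the stream parameters.
AVCodecContext* codec_ctx = avcodec_alloc_context3(codec);
codec_ctx->width     = video.front().width;
codec_ctx->height    = video.front().height;
codec_ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
codec_ctx->bit_rate  = 400000;
codec_ctx->time_base = AVRational{ 1, 30 };           // one tick per frame
codec_ctx->framerate = AVRational{ 30, 1 };
if (format_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;  // MP4 stores SPS/PPS as extradata

int ret = avcodec_open2(codec_ctx, codec, nullptr);
if (ret >= 0)
    ret = avcodec_parameters_from_context(stream->codecpar, codec_ctx);
stream->time_base = codec_ctx->time_base;             // the muxer may still adjust this

// ... avformat_write_header(format_ctx, nullptr) goes after this point ...

// Per frame: a plain counter is already in codec_ctx->time_base units,
// so no av_rescale_q() is needed when sending frames to the encoder.
converted_frame->pts = frame_count++;

The sws_getContext() call is also worth a second look: it currently converts from codec_ctx->pix_fmt, which is already AV_PIX_FMT_YUV420P after avcodec_parameters_to_context(), rather than from the actual pixel format of the frames in the vector.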