
Other articles (112)
-
Customizing by adding your logo, banner or background image
5 September 2013. Some themes take three customization elements into account: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (14984)
-
Java - RTSP save snapshot from Stream Packets
9 August 2016, by Guerino Rodella. I'm developing an application that requests snapshots from DVRs and IP cameras. The devices I'm working with only offer RTSP for this, so I implemented the necessary RTSP methods to start receiving the stream, and I now receive packets over the established UDP connection. My doubt is: how can I save the received data to a JPEG file? Where are the beginning and the end of the image bytes in what I receive?
I searched for libraries that implement this kind of service in Java, like Xuggler (which is no longer maintained) and javacpp-presets, which bundles the ffmpeg and opencv libraries, but I had some environment problems with it. If someone knows an easy, solid library that saves snapshots from streams, let me know.
My code :
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import javax.xml.bind.DatatypeConverter;

final long timeout = System.currentTimeMillis() + 3000;
byte[] fullImage = new byte[ 1024 * 1024 ];
DatagramSocket udpSocket = new DatagramSocket( 8000 );
int lastByte = 0;
// Skip the first 2 packets because I think they are HEADERS
// Since I don't know what they mean, I just print them in hex
for( int i = 0; i < 2; i++ ){
    byte[] buffer = new byte[ 1024 ];
    DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
    udpSocket.receive( dataPacket );
    int dataLength = dataPacket.getLength();
    buffer = Arrays.copyOf( buffer, dataLength );
    System.out.println( "RECEIVED: " + DatatypeConverter.printHexBinary( buffer ) + " L: " + dataLength );
}
do{
    byte[] buffer = new byte[ 1024 ];
    DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
    udpSocket.receive( dataPacket );
    int dataLength = dataPacket.getLength();
    System.out.println( "RECEIVED: " + new String( buffer, 0, dataLength ) );
    // Append only the bytes actually received, at the current write offset
    for( int i = 0; i < dataLength; i++ ){
        fullImage[ lastByte ] = buffer[ i ];
        lastByte++;
    }
} while( System.currentTimeMillis() < timeout );
// I know this timeout is wrong; I should stop after getting the full image bytes

The output:
RECEIVED : 80E0000100004650000000006742E01FDA014016C4 L : 21
RECEIVED : 80E00002000046500000000068CE30A480 L : 17
RECEIVED : Tons of data from the streaming...
RECEIVED : Tons of data from the streaming...
RECEIVED : Tons of data from the streaming...
[...]
As you might suppose, the image I'm saving to a file is not readable, because I'm doing it wrong. I think the header packets give me some information about the packets the server will send, telling me where the image starts and ends in the stream, but I don't understand them. Does someone know how to solve it? Any tips are welcome!
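Those first two packets are not JPEG headers: they are RTP packets (RFC 3550) carrying H.264 parameter sets. The 0x80 is the RTP version/flags byte, and the payload bytes starting 0x67 and 0x68 are the H.264 SPS and PPS NAL units (types 7 and 8). So the stream is raw H.264 over RTP, not JPEG, and a snapshot means collecting the NAL units of one keyframe and decoding them. Below is a minimal sketch of just the depacketization step; it assumes one whole NAL unit per packet (fragmented FU-A payloads, type 28, would still need reassembly per RFC 6184), and the class name is illustrative only:

import java.io.ByteArrayOutputStream;

public class RtpH264Collector {
    private static final byte[] START_CODE = { 0, 0, 0, 1 };
    private final ByteArrayOutputStream annexB = new ByteArrayOutputStream();

    // Consume one UDP datagram, already truncated to its real length.
    public void onPacket( byte[] rtp ){
        if( rtp.length < 12 ) return; // too short to be an RTP packet
        int version = ( rtp[ 0 ] >> 6 ) & 0x03; // must be 2 for RTP
        boolean marker = ( rtp[ 1 ] & 0x80 ) != 0; // set on a frame's last packet
        int headerLength = 12 + 4 * ( rtp[ 0 ] & 0x0F ); // fixed header + CSRC list
        if( version != 2 || rtp.length <= headerLength ) return;

        int nalType = rtp[ headerLength ] & 0x1F; // 7 = SPS, 8 = PPS, 5 = IDR slice
        // Assumes one whole NAL unit per packet (types 1..23); FU-A (type 28)
        // fragments a NAL unit across packets and needs reassembly first.
        annexB.write( START_CODE, 0, START_CODE.length );
        annexB.write( rtp, headerLength, rtp.length - headerLength );

        if( marker && nalType == 5 ){
            // annexB now holds SPS + PPS + a complete IDR frame in Annex-B
            // form; hand annexB.toByteArray() to a decoder for the JPEG.
        }
    }
}

From there, a decoder such as the ffmpeg CLI, or JavaCV as a maintained alternative to Xuggler, can turn the Annex-B byte stream into an image.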
-
What's wrong with how I save a vector of AVFrames as mp4 video using the h264 encoder?
8 April 2023, by nokla. I am trying to encode a vector of AVFrames to an MP4 file using the H.264 codec.

The code runs without errors, but when I try to open the saved video file with Windows Media Player or Adobe Media Encoder, both say it is in an unsupported format.

I went through it with a debugger and everything seemed to work fine.

This is the function I use to save the video:


void SaveVideo(std::string& output_filename, std::vector<AVFrame> video)
{
 // Initialize FFmpeg
 avformat_network_init();

 // Open the output file context
 AVFormatContext* format_ctx = nullptr;
 int ret = avformat_alloc_output_context2(&format_ctx, nullptr, nullptr, output_filename.c_str());
 if (ret < 0) {
 wxMessageBox("Error creating output context: ");
 wxMessageBox(av_err2str(ret));
 return;
 }

 // Open the output file
 ret = avio_open(&format_ctx->pb, output_filename.c_str(), AVIO_FLAG_WRITE);
 if (ret < 0) {
 std::cerr << "Error opening output file: " << av_err2str(ret) << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 // Create the video stream
 const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
 if (!codec) {
 std::cerr << "Error finding H.264 encoder" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 AVStream* stream = avformat_new_stream(format_ctx, codec);
 if (!stream) {
 std::cerr << "Error creating output stream" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }

 // Set the stream parameters
 stream->codecpar->codec_id = AV_CODEC_ID_H264;
 stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
 stream->codecpar->width =video.front().width;
 stream->codecpar->height = video.front().height;
 stream->codecpar->format = AV_PIX_FMT_YUV420P;
 stream->codecpar->bit_rate = 400000;
 AVRational framerate = { 30, 1 }; // 30 fps
 stream->time_base = av_inv_q(framerate); // time_base must be 1/framerate, i.e. 1/30

 // Allocate and configure the codec context
 AVCodecContext* codec_ctx = avcodec_alloc_context3(codec);
 if (!codec_ctx) {
 std::cout << "Error allocating codec context" << std::endl;
 avformat_free_context(format_ctx);
 return;
 }
 codec_ctx->codec_tag = 0;
 codec_ctx->time_base = stream->time_base;
 // MP4 needs the H.264 headers out-of-band (as extradata)
 codec_ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

 ret = avcodec_parameters_to_context(codec_ctx, stream->codecpar);
 if (ret < 0) {
 std::cout << "Error setting codec context parameters: " << av_err2str(ret) << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 AVDictionary* opt = NULL;
 ret = avcodec_open2(codec_ctx, codec, &opt);
 if (ret < 0) {
 wxMessageBox("Error opening codec: ");
 wxMessageBox(av_err2str(ret));
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Copy the opened encoder's parameters, including the global-header
 // extradata, back into the stream. Without this step the muxer writes
 // an MP4 with no codec configuration and players report an unsupported
 // format.
 ret = avcodec_parameters_from_context(stream->codecpar, codec_ctx);
 if (ret < 0) {
 wxMessageBox("Error copying codec parameters to stream");
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Allocate a buffer for the frame data
 AVFrame* frame = av_frame_alloc();
 if (!frame) {
 std::cerr << "Error allocating frame" << std::endl;
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 frame->format = codec_ctx->pix_fmt;
 frame->width = codec_ctx->width;
 frame->height = codec_ctx->height;

 ret = av_frame_get_buffer(frame, 0);
 if (ret < 0) {
 std::cerr << "Error allocating frame buffer: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Allocate a buffer for the converted frame data
 AVFrame* converted_frame = av_frame_alloc();
 if (!converted_frame) {
 std::cerr << "Error allocating converted frame" << std::endl;
 av_frame_free(&frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 converted_frame->format = AV_PIX_FMT_YUV420P;
 converted_frame->width = codec_ctx->width;
 converted_frame->height = codec_ctx->height;

 ret = av_frame_get_buffer(converted_frame, 0);
 if (ret < 0) {
 std::cerr << "Error allocating converted frame buffer: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Initialize the converter
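 // NOTE: the source format passed below is codec_ctx->pix_fmt, which is
 // YUV420P after avcodec_parameters_to_context, so this converter assumes
 // the input frames are already YUV420P; if they are not, their actual
 // format must be passed as the source format instead.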
 SwsContext* converter = sws_getContext(
 codec_ctx->width, codec_ctx->height, codec_ctx->pix_fmt,
 codec_ctx->width, codec_ctx->height, AV_PIX_FMT_YUV420P,
 SWS_BICUBIC, nullptr, nullptr, nullptr
 );
 if (!converter) {
 std::cerr << "Error initializing converter" << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the header to the output file
 ret = avformat_write_header(format_ctx, nullptr);
 if (ret < 0) {
 std::cerr << "Error writing header to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Iterate over the frames and write them to the output file
 int frame_count = 0;
 for (auto& src_frame : video) {
 // The encoder may still hold references to converted_frame's buffers
 // from the previous iteration, so make them writable before reuse
 ret = av_frame_make_writable(converted_frame);
 if (ret < 0) {
 break;
 }

 // Convert the frame to the output format
 sws_scale(converter,
 src_frame.data, src_frame.linesize, 0, src_frame.height,
 converted_frame->data, converted_frame->linesize
 );

 // Set the frame timestamp; codec_ctx->time_base is 1/framerate, so
 // the pts is simply the frame index
 converted_frame->pts = frame_count;
 frame_count++;
 // Encode the frame and write it to the output
 ret = avcodec_send_frame(codec_ctx, converted_frame);
 if (ret < 0) {
 std::cerr << "Error sending frame for encoding: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return;
 }
 while (ret >= 0) {
 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 // EAGAIN just means the encoder needs more input before emitting
 av_packet_free(&pkt);
 break;
 }
 else if (ret < 0) {
 wxMessageBox("Error during encoding");
 wxMessageBox(av_err2str(ret));
 av_packet_unref(pkt);
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the packet to the output file
 av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
 pkt->stream_index = stream->index;
 ret = av_interleaved_write_frame(format_ctx, pkt);
 av_packet_unref(pkt);
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 }
 }

 // Flush the encoder
 ret = avcodec_send_frame(codec_ctx, nullptr);
 if (ret < 0) {
 std::cerr << "Error flushing encoder: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 while (ret >= 0) {
 AVPacket* pkt = av_packet_alloc();
 if (!pkt) {
 std::cerr << "Error allocating packet" << std::endl;
 return;
 }
 ret = avcodec_receive_packet(codec_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
 // AVERROR_EOF marks the normal end of the flush, not an error
 av_packet_free(&pkt);
 break;
 }
 else if (ret < 0) {
 std::cerr << "Error during encoding: " << av_err2str(ret) << std::endl;
 av_packet_unref(pkt);
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }

 // Write the packet to the output file
 av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
 pkt->stream_index = stream->index;
 ret = av_interleaved_write_frame(format_ctx, pkt);
 av_packet_free(&pkt); // allocated per iteration, so free the whole packet
 if (ret < 0) {
 std::cerr << "Error writing packet to output file: " << av_err2str(ret) << std::endl;
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avformat_free_context(format_ctx);
 return;
 }
 }

 // Write the trailer to the output file
 ret = av_write_trailer(format_ctx);
 if (ret < 0) {
 std::cerr << "Error writing trailer to output file: " << av_err2str(ret) << std::endl;
 }

 // Free all resources
 av_frame_free(&frame);
 av_frame_free(&converted_frame);
 sws_freeContext(converter);
 avcodec_free_context(&codec_ctx);
 avio_closep(&format_ctx->pb); // close the file opened with avio_open
 avformat_free_context(format_ctx);
}

** I know this is not the prettiest way to write this code; I just wanted to try doing something like this.


** This is an altered version of the function, as the original one was inside a class. I changed it so you could compile it, but it might have some errors if I forgot to change something.


Any help would be appreciated.


-
Is there a way to save the thumbnail extracted by ffmpeg in the cache? [closed]
28 September 2022, by L K. Is there a way to save the thumbnail extracted by ffmpeg in the cache?


We are building a simple web video editing site.
I'm going to use a thumbnail while making a timeline
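One common approach is to extract each thumbnail once with the ffmpeg CLI and cache the resulting JPEG on disk, keyed by source file and timestamp, so the timeline never triggers the same extraction twice. A minimal sketch, assuming the ffmpeg binary is available on the server (the class name and cache layout are illustrative only):

import java.io.File;
import java.io.IOException;

public class ThumbnailCache {
    private final File cacheDir;

    public ThumbnailCache( File cacheDir ){
        this.cacheDir = cacheDir;
        cacheDir.mkdirs(); // make sure the cache directory exists
    }

    // Returns the cached thumbnail, running ffmpeg only on a cache miss.
    public File thumbnailAt( File video, int seconds ) throws IOException, InterruptedException {
        File cached = new File( cacheDir, video.getName() + "." + seconds + ".jpg" );
        if( cached.exists() ){
            return cached; // cache hit: no ffmpeg run needed
        }
        Process p = new ProcessBuilder(
                "ffmpeg",
                "-ss", String.valueOf( seconds ), // seek before the input: fast keyframe seek
                "-i", video.getPath(),
                "-frames:v", "1", // grab a single frame
                "-q:v", "3", // JPEG quality, 2-31, lower is better
                "-y", cached.getPath() )
                .redirectErrorStream( true )
                .start();
        if( p.waitFor() != 0 ){
            cached.delete(); // don't cache a partial file
            throw new IOException( "ffmpeg failed for " + video );
        }
        return cached;
    }
}

The web layer can then serve the cached file directly, with ordinary HTTP caching on top, instead of re-running ffmpeg for every timeline render.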