
Other articles (76)
-
MediaSPIP version 0.1 Beta
16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...) -
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...) -
Improving the base version
13 September 2013. Nicer multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two images below for a comparison.
To use it, enable the Chosen plugin (General site configuration > Plugin management), then configure the plugin (Templates > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (8683)
-
Encoding YUV420p uncompressed video that comes from custom IO instead of a file (C++ code)
24 April 2023, by devprog. I am trying to encode YUV420p raw data that is being streamed live from a private memory implementation. It didn't work. As a preliminary stage before reading from private memory, and after some searching, I tried to read custom IO input following the ffmpeg example. This didn't work either. So, for my tests, I created a YUV file. First I used


ffmpeg -f lavfi -i testsrc=duration=10:size=1920x1080:rate=30 1920x1080p30_10s.mp4



to create an mp4 file. As a note, it turns out that this creates an mp4 file with the yuv444 color format. I converted it to yuv420p with


ffmpeg -i 1920x1080p30_10s.mp4 -c:v libx264 -pix_fmt yuv420p 1920x1080p30_10s_420p.mp4



and then created the yuv file with


ffmpeg -i 1920x1080p30_10s_420p.mp4 -c:v rawvideo -pixel_format yuv420p output_1920x1080_420p.yuv



Testing output_1920x1080_420p.yuv with ffplay looks good. This file will be the input for my custom IO operation.


Then I wrote sample code as follows (based on several links):
avio_reading ffmpeg example.
Creating Custom FFmpeg IO-Context


The code fails at this call:


if ((ret = avformat_open_input(&pAVFormatContext, "IO", NULL, &options_in) < 0) )
 {
 printf ("err=%d\n", ret);
 av_strerror(ret, errbuf, sizeof(errbuf));
 fprintf(stderr, "Unable to open err=%s\n", errbuf);
 return false;
 }



Some notes:


- I added the code, the compile command, and the output.
- I tried avformat_open_input() both with and without the option parameters; it failed either way.
- I tried avio_alloc_context() both with and without the seek callback; it failed either way.
- I tried using an mp4 file instead of the YUV and it appears to run (it didn't fail), BUT that is not what I need. I need a YUV input.










Can anyone help? Is it even possible?


My code compiles with:


g++ ffmpeg_custom_io.cpp -lavformat -lavdevice -lavcodec -lavutil -lswscale -lswresample -lpthread -pthread -o ffmpeg_custom_io



and the whole code is as follows:


using namespace std;

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <iostream>

extern "C" {
 #include <libavformat/avformat.h>
 #include <libavcodec/avcodec.h>
 
 #include <libavutil/opt.h>
 #include <libavutil/imgutils.h>
 #include <libavutil/file.h>
}


struct buffer_data {
 uint8_t * bd_ptr;
 size_t bd_size; ///< size left in the buffer
};

int readFileBuffer(void *opaque, uint8_t *buf, int buf_size)
{

 printf("Start reading custom IO\n");
 struct buffer_data *bd = (struct buffer_data *)opaque;
 buf_size = FFMIN(buf_size, bd->bd_size);

 printf("min buf size:%d\n", buf_size);

 if (!buf_size) {
 printf("return -1, size = %d\n", buf_size);
 return -1;
 }
 printf("ptr:%p size:%zu\n", bd->bd_ptr, bd->bd_size);

 /* copy internal buffer data to buf */
 memcpy(buf, bd->bd_ptr, buf_size);
 bd->bd_ptr += buf_size;
 bd->bd_size -= buf_size;
 printf("End reading custom IO\n");
 return buf_size;

}

int64_t seekFunction(void* opaque, int64_t offset, int whence)
{
 std::cout << "SEEK " << std::endl;
 if (whence == AVSEEK_SIZE)
 return -1; // I don't know "size of my handle in bytes"
}

static void encode(AVCodecContext *enc_ctx, AVFrame *frame, AVPacket *pkt,
 FILE *outfile)
{
 int ret;

 /* send the frame to the encoder */
 if (frame)
 printf("Send frame %3" PRId64 "\n", frame->pts);

 ret = avcodec_send_frame(enc_ctx, frame);
 if (ret < 0) {
 fprintf(stderr, "Error sending a frame for encoding\n");
 exit(1);
 }

 while (ret >= 0) {
 ret = avcodec_receive_packet(enc_ctx, pkt);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 return;
 else if (ret < 0) {
 fprintf(stderr, "Error during encoding\n");
 exit(1);
 }

 printf("Write packet %3" PRId64 " (size=%5d)\n", pkt->pts, pkt->size);
 fwrite(pkt->data, 1, pkt->size, outfile);
 av_packet_unref(pkt);
 }
}

int main(int argc, char **argv)
{

 av_log_set_level(AV_LOG_MAX_OFFSET);

 AVFormatContext* pAVFormatContext;
 uint8_t *buffer = NULL, *avio_ctx_buffer = NULL;
 size_t buffer_size, avio_ctx_buffer_size = 32768;

 int ret = 0;
 char errbuf[100];

 // Alloc format context
 //--------------------
 if ( !(pAVFormatContext = avformat_alloc_context()) )
 {
 ret = AVERROR(ENOMEM);
 std::cout << "error=" << ret << " Fail to allocate avformat_alloc_context()\n";
 return false;
 }

 // Copy from ffmpeg example - bd buffer
 //--------------------
 struct buffer_data bd = { 0, 0 };

 // slurp file content into buffer
 ret = av_file_map("output_1920x1080_420p.yuv", &buffer, &buffer_size, 0, NULL);
 if (ret < 0)
 {
 std::cout << "error=" << ret << " Fail to allocate avformat_alloc_context()\n";
 return false;
 }

 // fill opaque structure used by the AVIOContext read callback
 bd.bd_ptr = buffer;
 bd.bd_size = buffer_size;

 // Prepare AVIO context
 //--------------------
 avio_ctx_buffer = static_cast<uint8_t*>(av_malloc(avio_ctx_buffer_size + AV_INPUT_BUFFER_PADDING_SIZE));

 if (!avio_ctx_buffer) {
 ret = AVERROR(ENOMEM);
 std::cout << "error=" << ret << " Fail to allocate av_malloc()\n";
 return false;
 }

 AVIOContext* pAVIOContext = avio_alloc_context(avio_ctx_buffer,
 avio_ctx_buffer_size,
 0,
 &bd,
 readFileBuffer,
 NULL,
 seekFunction);

 if (!pAVIOContext) {
 ret = AVERROR(ENOMEM);
 std::cout << "error=" << ret << " Fail to allocate avio_alloc_context()\n";
 return false;
 }

 pAVFormatContext->pb = pAVIOContext;
 pAVFormatContext->flags = AVFMT_FLAG_CUSTOM_IO; // - NOT SURE ABOUT THIS FLAG

 // Handle ffmpeg input
 //--------------------
 AVDictionary* options_in = NULL;
 //av_dict_set(&options_in, "framerate", "30", 0);
 av_dict_set(&options_in, "video_size", "1920x1080", 0);
 av_dict_set(&options_in, "pixel_format", "yuv420p", 0);
 av_dict_set(&options_in, "vcodec", "rawvideo", 0);

 if ((ret = avformat_open_input(&pAVFormatContext, "IO", NULL, &options_in) < 0) )
 {
 printf ("err=%d\n", ret);
 av_strerror(ret, errbuf, sizeof(errbuf));
 fprintf(stderr, "Unable to open err=%s\n", errbuf);
 return false;
 }

 //Raw video doesn't contain any stream information.
 //if ((ret = avformat_find_stream_info(pAVFormatContext, 0)) < 0) {
 // fprintf(stderr, "Failed to retrieve input stream information");
 //}

 AVCodec* pAVCodec = avcodec_find_decoder(AV_CODEC_ID_RAWVIDEO); //Get pointer to rawvideo codec.

 AVCodecContext* pAVCodecContext = avcodec_alloc_context3(pAVCodec); //Allocate codec context.

 //Fill the codec context based on the values from the codec parameters.
 AVStream *vid_stream = pAVFormatContext->streams[0];
 avcodec_parameters_to_context(pAVCodecContext, vid_stream->codecpar);

 avcodec_open2(pAVCodecContext, pAVCodec, NULL); //Open the codec

 //Allocate memory for packet and frame
 AVPacket* pAVPacketIn = av_packet_alloc();
 AVFrame* pAVFrameOut = av_frame_alloc();

 /* find the mpeg1video encoder */
 AVCodec* pAVCodecOut = avcodec_find_encoder_by_name("libx264");
 if (!pAVCodecOut) {
 fprintf(stderr, "Codec '%s' not found\n", "libx264");
 exit(1);
 }

 AVCodecContext* pAVCodecContextOut = avcodec_alloc_context3(pAVCodecOut);
 if (!pAVCodecContextOut) {
 fprintf(stderr, "Could not allocate video codec context\n");
 exit(1);
 }

 AVPacket* pAVPacketOut = av_packet_alloc();
 if (!pAVPacketOut)
 exit(1);

 /* put sample parameters */
 pAVCodecContextOut->bit_rate = 2000000;
 /* resolution must be a multiple of two */
 pAVCodecContextOut->width = 1280;
 pAVCodecContextOut->height = 720;
 /* frames per second */
 pAVCodecContextOut->time_base = (AVRational){1, 30};
 pAVCodecContextOut->framerate = (AVRational){30, 1};

 /* emit one intra frame every ten frames
 * check frame pict_type before passing frame
 * to encoder, if frame->pict_type is AV_PICTURE_TYPE_I
 * then gop_size is ignored and the output of encoder
 * will always be I frame irrespective to gop_size
 */
 pAVCodecContextOut->gop_size = 10;
 pAVCodecContextOut->max_b_frames = 0;
 pAVCodecContextOut->pix_fmt = AV_PIX_FMT_YUV420P;

 if (pAVCodecOut->id == AV_CODEC_ID_H264)
 av_opt_set(pAVCodecContextOut->priv_data, "preset", "slow", 0);

 /* open it */
 ret = avcodec_open2(pAVCodecContextOut, pAVCodecOut, NULL);
 if (ret < 0) {
 //fprintf(stderr, "Could not open codec: %s\n", av_err2str(ret));
 exit(1);
 }

 FILE* filename_out = fopen("out_vid_h264.mp4", "wb");
 if (!filename_out) {
 fprintf(stderr, "Could not open %s\n", "out_vid_h264.mp4");
 exit(1);
 }

 pAVFrameOut->format = pAVCodecContextOut->pix_fmt;
 pAVFrameOut->width = pAVCodecContextOut->width;
 pAVFrameOut->height = pAVCodecContextOut->height;


 uint8_t endcode_suf[] = { 0, 0, 1, 0xb7 };
 int i = 0;
 //Read video frames and pass through the decoder.
 //Note: Since the video is rawvideo, we don't really have to pass it through the decoder.
 while (av_read_frame(pAVFormatContext, pAVPacketIn) >= 0)
 {
 //The video is not encoded - passing through the decoder is simply copy the data.
 avcodec_send_packet(pAVCodecContext, pAVPacketIn); //Supply raw packet data as input to the "decoder".
 avcodec_receive_frame(pAVCodecContext, pAVFrameOut); //Return decoded output data from the "decoder".

 fflush(stdout);

 pAVFrameOut->pts = i++;

 /* encode the image */
 encode(pAVCodecContextOut, pAVFrameOut, pAVPacketOut, filename_out);
 }

 /* flush the encoder */
 encode(pAVCodecContextOut, NULL, pAVPacketOut, filename_out);

 /* Add sequence end code to have a real MPEG file.
 It makes only sense because this tiny examples writes packets
 directly. This is called "elementary stream" and only works for some
 codecs. To create a valid file, you usually need to write packets
 into a proper file format or protocol; see mux.c.
 */
 if (pAVCodecOut->id == AV_CODEC_ID_MPEG1VIDEO || pAVCodecOut->id == AV_CODEC_ID_MPEG2VIDEO)
 fwrite(endcode_suf, 1, sizeof(endcode_suf), filename_out);
 fclose(filename_out);


 return 0;
}


The output


./ffmpeg_custom_io 
Start reading custom IO
min buf size:32768
ptr:0x7f236a200000 size:933120000
End reading custom IO
Probing adp score:25 size:2048
Probing adp score:25 size:4096
Probing adp score:25 size:8192
Probing adp score:25 size:16384
Start reading custom IO
min buf size:32768
ptr:0x7f236a208000 size:933087232
End reading custom IO
Start reading custom IO
min buf size:65536
ptr:0x7f236a210000 size:933054464
End reading custom IO
Start reading custom IO
min buf size:131072
ptr:0x7f236a220000 size:932988928
End reading custom IO
Start reading custom IO
min buf size:262144
ptr:0x7f236a240000 size:932857856
End reading custom IO
Start reading custom IO
min buf size:524288
ptr:0x7f236a280000 size:932595712
End reading custom IO
err=1
Unable to open err=Error number 1 occurred




I tried everything noted above expecting avformat_open_input() to succeed, but it failed.
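
For reference, a minimal sketch of how a headerless YUV stream is typically opened through a custom AVIOContext by naming the rawvideo demuxer explicitly instead of relying on probing. The helper name open_raw_yuv and the option values are assumptions, and the const AVInputFormat* type assumes FFmpeg 5.x or newer:

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}

// Sketch only: open an AVFormatContext whose pb already points at a custom
// AVIOContext, forcing the rawvideo demuxer because raw YUV has no header
// that probing could recognise.
static int open_raw_yuv(AVFormatContext **fmt_ctx)
{
    const AVInputFormat *raw_fmt = av_find_input_format("rawvideo");
    if (!raw_fmt)
        return AVERROR_DEMUXER_NOT_FOUND;

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "video_size",   "1920x1080", 0);
    av_dict_set(&opts, "pixel_format", "yuv420p",   0);
    av_dict_set(&opts, "framerate",    "30",        0);

    // The url argument may stay NULL when (*fmt_ctx)->pb is already set;
    // ret carries the real error code on failure.
    int ret = avformat_open_input(fmt_ctx, NULL, raw_fmt, &opts);
    av_dict_free(&opts);
    return ret;
}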


-
FFMPEG — Create New Audio File using aselect filter API call in C code
17 June 2021, by AfricanMamba. I have been working on a program in C that uses FFMPEG's libraries to remove some segments of audio from a given audio file. I have created the filter successfully using the API call,
const AVFilter* aselect = avfilter_get_by_name("aselect")
, linking it with abuffersrc and abuffersink.

However, I am not entirely sure what to pass as the arguments when creating the filter, because when I pass in the argument string,
"'between(t,10,32)', asetpts = N/SR/TB output.wav"
, it gives me an error saying: "Invalid chars ',asetpts=N/SR/TBoutput.wav' at the end of expression"


But when I pass in just
"'between(t,10,32)'"
as the arguments for the filter, the program compiles and runs, but no output file is created containing just the audio from 10 seconds to 32 seconds. Does anyone happen to know whether it is even possible to use the aselect filter API to create new audio files, or whether there is an example showing how to copy only some frames from one audio file to another? Thanks!
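
For comparison, a minimal sketch of how the selection expression and asetpts are usually passed as two separate filters in a textual graph description, with the output file left to the muxer rather than the filter arguments. It assumes an abuffer source and an abuffersink sink have already been created, and the helper name build_audio_graph is invented:

#include <libavfilter/avfilter.h>
#include <libavutil/mem.h>

/* Sketch only: parse an "aselect + asetpts" chain between an existing
 * abuffer source and abuffersink sink. The output file name is never part
 * of the filter arguments; writing output.wav is the muxer's job. */
static int build_audio_graph(AVFilterGraph *graph,
                             AVFilterContext *src,   /* abuffer, already created */
                             AVFilterContext *sink)  /* abuffersink, already created */
{
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    if (!outputs || !inputs) {
        avfilter_inout_free(&outputs);
        avfilter_inout_free(&inputs);
        return AVERROR(ENOMEM);
    }

    /* The expression and asetpts are separate filters in the chain. */
    const char *desc = "aselect='between(t,10,32)',asetpts=N/SR/TB";

    outputs->name       = av_strdup("in");
    outputs->filter_ctx = src;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name        = av_strdup("out");
    inputs->filter_ctx  = sink;
    inputs->pad_idx     = 0;
    inputs->next        = NULL;

    int ret = avfilter_graph_parse_ptr(graph, desc, &inputs, &outputs, NULL);
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    if (ret < 0)
        return ret;

    return avfilter_graph_config(graph, NULL);
}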

-
I've downloaded the ffmpeg source code, but I don't know how to use it on Android
23 October 2013, by Sravanthi Pasaragonda. I've downloaded the ffmpeg source code, but I don't know how to use it on Android. I need to remove the audio from a video file and add other audio to the video. This is my task. I am using Ubuntu 10.04, and I have installed NDK-r9, ffmpeg and yasm 1-2.0 on my system.
Please give me any sample process for how to start with an ffmpeg example. I have downloaded lots of examples from GitHub, like https://github.com/appunite/AndroidFFmpeg
http://www.roman10.net/how-to-build-android-applications-based-on-ffmpeg-by-an-example/
https://github.com/mconf/android-ffmpeg
But I don't know how to compile all the projects using ffmpeg. Please give me a step-by-step process for any ffmpeg example on Ubuntu.
Thanks