
Advanced search
Media (91)
-
Head down (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011, by
Updated: April 2013
Language: English
Type: Audio
Other articles (100)
-
Updating from version 0.1 to 0.2
24 June 2013, by — Explanation of the various notable changes when moving from version 0.1 of MediaSPIP to version 0.2. What's new?
Regarding software dependencies: use of the latest versions of FFmpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...) -
Personalizing by adding your logo, banner or background image
5 September 2013, by — Some themes support three customization elements: adding a logo; adding a banner; adding a background image;
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by — Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
Once it is enabled, a preconfiguration is automatically set up by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
On other sites (10383)
-
FFmpeg extracts a different number of frames when using -filter_complex together with the split filter
14 June 2016, by Konstantin — I am fiddling with ffmpeg, extracting JPG pictures from videos. I split the input stream into two output streams with -filter_complex, because I process my videos from a direct HTTP link (free space on the VPS is scarce) and I don't want to read the whole video twice (the traffic quota is also scarce). Furthermore, I need two series of pictures: one for applying some filters (fps change, scale, unsharp, crop, scale) and then selecting from them by eye, and the other left untouched (except for the fps change and cropping the black borders), to be used for further processing after selecting from the first series. I call my ffmpeg command from a Ruby script, so it contains some string interpolation / substitution in the form #{}. My working command line looked like:
ffmpeg -y -fflags +genpts -loglevel verbose -i #{url} -filter_complex "[0:v]fps=fps=#{new_fps.round(5).to_s},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -f #{format} -c copy #{options} -map_chapters -1 - -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
#options is set when the output is MP4; its value is then "-movflags frag_keyframe+empty_moov", so I can send the stream to standard output without seeking capability and upload it somewhere without creating huge temporary video files.
So I get two series of pictures: one of them filtered and sharpened, the other effectively untouched. I also get an output stream of the video on standard output, which is handled by the Open3.popen3 library, connecting it to the input of two other commands. The problem arises when I want to seek to a given point in the video while omitting the streamed video output on STDOUT. I try to apply combined seeking: a fast seek to just before the given time code, then a slow seek to the exact time code, given in floating-point seconds:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss #{(seek_to-seek_before).to_s} -i #{url} -ss #{seek_before.to_s} -t #{t_duration.to_s} -filter_complex "[0:v]fps=fps=#{pics_per_sec},split=2[in1][in2];[in1]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]},scale=#{thumb_width}:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(#{gammaval})[out1];[in2]crop=iw-#{crop[0]+crop[2]}:ih-#{crop[1]+crop[3]}:#{crop[0]}:#{crop[1]}[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
Running this command I get the needed two series of pictures, but they contain different numbers of images: 233 vs. 484.
The actual values can be read from this interpolated / substituted command line:
ffmpeg -report -y -fflags +genpts -loglevel verbose -ss 1619.0443599999999 -i fabf.avi -ss 50.0 -t 46.505879999999934 -filter_complex "[0:v]fps=fps=5,split=2[in1][in2];[in1]crop=iw-0:ih-0:0:0,scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]crop=iw-0:ih-0:0:0[out2]" -map '[out1]' -f image2 -q 1 %06d.jpg -map '[out2]' -f image2 -q 1 big_%06d.jpg
A detailed log can be found here: http://www.filefactory.com/file/1yih17k2hrmp/ffmpeg-20160610-223820.txt
Before the last line it shows 188 duplicated frames. I also tried passing the "-vsync 0" option, but it didn't help. When I generate the two series of images in two consecutive steps, with two different command lines, no problem arises; I get the same number of pictures in both series, of course. So my question is: how can I use the latter command line, generating the two series of images with only one reading / parsing of the remote video file?
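One hedged workaround (a sketch only, not verified against this exact file): drop the output-side `-ss`/`-t` pair and do the precise cut inside the filtergraph with `trim` and `setpts`, so both branches of `split` are derived from exactly the same decoded, re-timestamped frames. Using the interpolated values from the question (the no-op `crop=iw-0:ih-0:0:0` is omitted here):

```shell
# Sketch: the fast input seek stays as -ss before -i; the exact cut is done
# by trim= inside the graph, before fps= and split=, so both outputs are
# built from identical frames.
FILTER="[0:v]trim=start=50:duration=46.50588,setpts=PTS-STARTPTS,fps=5,split=2[in1][in2];[in1]scale=280:-1:flags=lanczos,unsharp,lutyuv=y=gammaval(0.526316)[out1];[in2]copy[out2]"
echo ffmpeg -y -ss 1619.04436 -i fabf.avi -filter_complex "$FILTER" \
     -map "[out1]" -q 1 %06d.jpg -map "[out2]" -q 1 big_%06d.jpg
```

Since `trim` runs before `fps`, duplicate-frame handling applies identically to both branches, which is the asymmetry the two output `-map`s seemed to expose.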
-
libavcodec audio decoding is producing garbled samples
14 November 2024, by user28288805 — I'm trying to write a function that extracts the raw sample data from audio files. But when debugging with a test file, I found that samples in planar floating-point format were not in the range -1.0f to 1.0f as specified in the documentation.


Here is the function:


AudioResource::ReturnCode AudioResource::LoadFromFile(std::string FilePath)
{
 std::string FileURL = "file:" + FilePath;
 AVFormatContext* FormatContext = nullptr;
 int Error = avformat_open_input(&FormatContext, FileURL.c_str(), nullptr, nullptr);

 if (Error < 0)
 {
 return ERROR_OPENING_FILE;
 }

 Error = avformat_find_stream_info(FormatContext, nullptr);
 if (Error < 0)
 {
 return ERROR_FINDING_STREAM_INFO;
 }

 int AudioStream = -1;
 AVCodecParameters* CodecParams;
 for (int i = 0; i < FormatContext->nb_streams; i++)
 {
 if (FormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
 {
 CodecParams = FormatContext->streams[i]->codecpar;
 AudioStream = i;
 }
 }

 if (AudioStream == -1)
 {
 return ERROR_AUDIO_STREAM_NOT_FOUND;
 }

 SampleRate = FormatContext->streams[AudioStream]->codecpar->sample_rate;
 AVSampleFormat SampleFormat = (AVSampleFormat) FormatContext->streams[AudioStream]->codecpar->format;
 Channels = FormatContext->streams[AudioStream]->codecpar->ch_layout.nb_channels;

 if (Channels > 2)
 {
 return ERROR_UNSUPPORTED_CHANNEL_COUNT;
 }

 switch (SampleFormat)
 {
 case AV_SAMPLE_FMT_NONE:
 return ERROR_UNKWON_SAMPLE_FORMAT;
 break;
 case AV_SAMPLE_FMT_U8:
 BytesPerSample = 1;
 if (Channels == 1)
 {
 SampleType = MONO_8BIT;
 }
 else
 {
 SampleType = STEREO_8BIT;
 }
 break;
 case AV_SAMPLE_FMT_S16:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_S32:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_FLT:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_DBL:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_U8P:
 BytesPerSample = 1;
 if (Channels == 1)
 {
 SampleType = MONO_8BIT;
 }
 else
 {
 SampleType = STEREO_8BIT;
 }
 break;
 case AV_SAMPLE_FMT_S16P:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_S32P:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_FLTP:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_DBLP:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_S64:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 case AV_SAMPLE_FMT_S64P:
 BytesPerSample = 2;
 if (Channels == 1)
 {
 SampleType = MONO_16BIT;
 }
 else
 {
 SampleType = STEREO_16BIT;
 }
 break;
 default:
 return ERROR_UNKWON_SAMPLE_FORMAT;
 break;
 }

 const AVCodec* AudioCodec = avcodec_find_decoder(CodecParams->codec_id);
 AVCodecContext* CodecContext = avcodec_alloc_context3(AudioCodec);
 avcodec_parameters_to_context(CodecContext, CodecParams);
 avcodec_open2(CodecContext, AudioCodec, nullptr);

 AVPacket* CurrentPacket = av_packet_alloc();
 AVFrame* CurrentFrame = av_frame_alloc();

 while (av_read_frame(FormatContext, CurrentPacket) >= 0)
 {
 avcodec_send_packet(CodecContext, CurrentPacket);
 for (;;)
 {
 Error = avcodec_receive_frame(CodecContext, CurrentFrame);
 if ((Error == AVERROR(EAGAIN)) || (Error == AVERROR_EOF))
 {
 break;
 }
 else if (Error == AVERROR(EINVAL))
 {
 return ERROR_RECIVING_FRAME;
 }
 else if (Error != 0)
 {
 return ERROR_UNEXSPECTED;
 }

 if (SampleFormat == AV_SAMPLE_FMT_U8)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S16)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S32)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_FLT)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_DBL)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_U8P)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S16P)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S32P)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_FLTP) //
 {
 if (Channels == 2)
 {
 for (size_t i = 0; i < CurrentFrame->linesize[0]; i += sizeof(float))
 {
 float CurrentLeftSample = 0.0f;
 float CurrentRightSample = 0.0f;
 memcpy(&CurrentLeftSample, &CurrentFrame->data[0][i], sizeof(float));
 memcpy(&CurrentRightSample, &CurrentFrame->data[1][i], sizeof(float));

 short int QuantizedLeftSample = roundf(CurrentLeftSample * 0x7fff);
 short int QuantizedRightSample = roundf(CurrentRightSample * 0x7fff);

 LoadByteData<short int>(QuantizedLeftSample, AudioData);
 LoadByteData<short int>(QuantizedRightSample, AudioData);
 }
 }
 else
 {

 }
 }
 else if (SampleFormat == AV_SAMPLE_FMT_DBLP)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S64)
 {

 }
 else if (SampleFormat == AV_SAMPLE_FMT_S64P)
 {

 }
 else
 {
 return ERROR_UNEXSPECTED;
 }
 }
 }

 av_frame_free(&CurrentFrame);
 av_packet_free(&CurrentPacket);
 avcodec_free_context(&CodecContext);
 avformat_free_context(FormatContext);
 return OK;
}


Here is where I am reading from the AVFrame's buffer:


for (size_t i = 0; i < CurrentFrame->linesize[0]; i += sizeof(float))
 {
 float CurrentLeftSample = 0.0f;
 float CurrentRightSample = 0.0f;
 memcpy(&CurrentLeftSample, &CurrentFrame->data[0][i], sizeof(float));
 memcpy(&CurrentRightSample, &CurrentFrame->data[1][i], sizeof(float));

 short int QuantizedLeftSample = roundf(CurrentLeftSample * 0x7fff);
 short int QuantizedRightSample = roundf(CurrentRightSample * 0x7fff);

 LoadByteData<short int>(QuantizedLeftSample, AudioData);
 LoadByteData<short int>(QuantizedRightSample, AudioData);
 }


I've tried using different bounds in the for loop, such as CurrentFrame->nb_samples and CurrentFrame->buf[0].size, but with no success; it still produces the same results.

Any help would be much appreciated.
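For what it's worth, a likely culprit is the loop bound: for audio, FFmpeg's `AVFrame.linesize[0]` is the allocated plane size in bytes and may include alignment padding past the last valid sample, whereas `AVFrame.nb_samples` is the authoritative per-channel count. A minimal, self-contained sketch of the per-plane conversion under that assumption (plain float pointers stand in for `frame->data[0]`/`data[1]`; clamping is added because float decoders can overshoot ±1.0 slightly):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Quantize one float sample to signed 16-bit PCM. Decoded floats can slightly
// exceed [-1.0, 1.0], so clamp before scaling to avoid integer wrap-around.
static int16_t QuantizeSample(float s)
{
    s = std::clamp(s, -1.0f, 1.0f);
    return static_cast<int16_t>(std::lroundf(s * 32767.0f));
}

// Interleave two planar float channels into 16-bit PCM. The loop is bounded
// by nb_samples (frame->nb_samples), NOT by linesize[0], which may be padded.
std::vector<int16_t> InterleavePlanarFloat(const float* left,
                                           const float* right,
                                           int nb_samples)
{
    std::vector<int16_t> out;
    out.reserve(2 * static_cast<size_t>(nb_samples));
    for (int i = 0; i < nb_samples; ++i)
    {
        out.push_back(QuantizeSample(left[i]));
        out.push_back(QuantizeSample(right[i]));
    }
    return out;
}
```

In the question's loop, that would mean iterating `i` from 0 to `CurrentFrame->nb_samples` and indexing the planes as float arrays, rather than stepping `sizeof(float)` bytes up to `linesize[0]`.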


-
-stream_loop generates a huge output video [FFMPEG]
26 March 2021, by Mouaad Abdelghafour AITALI — I'm trying to loop a short video, e.g. 190 times, to match the audio length. I use the following command:


-y -stream_loop 190 -i input.mp4 -c copy output.mp4



The command above works, but it generates a huge file: for a 3-minute video the size is 885 MB.


2021-03-25 23:52:30.445 5687-6253/maa.abc.music_maker D/XXX: LOOPING VIDEO SIZE ===> 885.845MB



Is there any way to loop the video to match the audio length without using -stream_loop?
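With `-c copy` the compressed stream is simply written out 190 times, so the size grows linearly with the loop count; there is no way around that without re-encoding. A hedged sketch of the usual alternative (file names are placeholders): loop indefinitely, mux the audio in, re-encode the video at a chosen CRF, and let `-shortest` end the output at the audio's duration.

```shell
# Sketch: -stream_loop -1 repeats the clip until -shortest stops the output
# when the audio (the shorter input) ends; CRF re-encoding controls the size.
CMD="ffmpeg -y -stream_loop -1 -i input.mp4 -i audio.mp3 -map 0:v -map 1:a -c:v libx264 -crf 23 -c:a aac -shortest output.mp4"
echo "$CMD"
```

This trades encoding time for file size; raising the CRF value shrinks the output further at some quality cost.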