
Other articles (63)
-
MediaSPIP Core: Configuration
9 November 2010, by
MediaSPIP Core provides three different configuration pages by default (these pages rely on the CFG configuration plugin to work): a page for the general configuration of the skeletons; a page for the configuration of the site's home page; a page for the configuration of the sectors;
It also provides an additional page, which only appears when certain plugins are enabled, for controlling their specific display and features (...) -
Improvements to the base version
13 September 2013
Nicer multiple selection
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
To use it, simply enable the Chosen plugin (Configuration générale du site > Gestion des plugins), then configure it (Les squelettes > Chosen) by enabling Chosen on the public site and specifying the form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
Custom menus
14 November 2010, by
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: the main menu; identifier: barrenav; this menu is generally inserted at the top of the page after the header block, and its identifier makes it compatible with Zpip-based skeletons; (...)
On other sites (8562)
-
How to play audio from a video file in C#
10 August 2014, by Ivan Lisovich
To read the video file I use the ffmpeg libraries (http://ffmpeg.zeranoe.com/builds/), build ffmpeg-2.2.3-win32-dev.7z.
Managed C++ code to read the video file:
void VideoFileReader::Read( String^ fileName, System::Collections::Generic::List<System::Drawing::Bitmap^>^ imageData, System::Collections::Generic::List<array<byte>^>^ audioData )
{
char *nativeFileName = ManagedStringToUnmanagedUTF8Char(fileName);
libffmpeg::AVFormatContext *pFormatCtx = NULL;
libffmpeg::AVCodec *pCodec = NULL;
libffmpeg::AVCodec *aCodec = NULL;
libffmpeg::av_register_all();
if(libffmpeg::avformat_open_input(&pFormatCtx, nativeFileName, NULL, NULL) != 0)
{
throw gcnew System::Exception( "Couldn't open file" );
}
if(libffmpeg::avformat_find_stream_info(pFormatCtx, NULL) < 0)
{
throw gcnew System::Exception( "Couldn't find stream information" );
}
libffmpeg::av_dump_format(pFormatCtx, 0, nativeFileName, 0);
int videoStream = libffmpeg::av_find_best_stream(pFormatCtx, libffmpeg::AVMEDIA_TYPE_VIDEO, -1, -1, &pCodec, 0);
int audioStream = libffmpeg::av_find_best_stream(pFormatCtx, libffmpeg::AVMEDIA_TYPE_AUDIO, -1, -1, &aCodec, 0);
if(videoStream == -1)
{
throw gcnew System::Exception( "Didn't find a video stream" );
}
if(audioStream == -1)
{
throw gcnew System::Exception( "Didn't find an audio stream" );
}
libffmpeg::AVCodecContext *aCodecCtx = pFormatCtx->streams[audioStream]->codec;
libffmpeg::avcodec_open2(aCodecCtx, aCodec, NULL);
m_channels = aCodecCtx->channels;
m_sampleRate = aCodecCtx->sample_rate;
m_bitsPerSample = aCodecCtx->bits_per_coded_sample;
libffmpeg::AVCodecContext *pCodecCtx = pFormatCtx->streams[videoStream]->codec;
if(libffmpeg::avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
{
throw gcnew System::Exception( "Could not open codec" );
}
m_width = pCodecCtx->width;
m_height = pCodecCtx->height;
m_framesCount = pFormatCtx->streams[videoStream]->nb_frames;
if (pFormatCtx->streams[videoStream]->r_frame_rate.den == 0)
{
m_frameRate = 25;
}
else
{
m_frameRate = pFormatCtx->streams[videoStream]->r_frame_rate.num / pFormatCtx->streams[videoStream]->r_frame_rate.den;
if (m_frameRate == 0)
{
m_frameRate = 25;
}
}
libffmpeg::AVFrame *pFrame = libffmpeg::av_frame_alloc();
int numBytes = libffmpeg::avpicture_get_size(libffmpeg::PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
libffmpeg::uint8_t *buffer = (libffmpeg::uint8_t *)libffmpeg::av_malloc(numBytes*sizeof(libffmpeg::uint8_t));
struct libffmpeg::SwsContext *sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, libffmpeg::PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
libffmpeg::AVPacket packet;
libffmpeg::AVFrame *filt_frame = libffmpeg::av_frame_alloc();
while(av_read_frame(pFormatCtx, &packet) >= 0)
{
if(packet.stream_index == videoStream)
{
System::Drawing::Bitmap ^bitmap = nullptr;
int frameFinished;
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if(frameFinished)
{
bitmap = gcnew System::Drawing::Bitmap( pCodecCtx->width, pCodecCtx->height, System::Drawing::Imaging::PixelFormat::Format24bppRgb );
System::Drawing::Imaging::BitmapData^ bitmapData = bitmap->LockBits( System::Drawing::Rectangle( 0, 0, pCodecCtx->width, pCodecCtx->height ), System::Drawing::Imaging::ImageLockMode::ReadOnly, System::Drawing::Imaging::PixelFormat::Format24bppRgb );
libffmpeg::uint8_t* ptr = reinterpret_cast<libffmpeg::uint8_t*>( static_cast<void*>( bitmapData->Scan0 ) );
libffmpeg::uint8_t* srcData[4] = { ptr, NULL, NULL, NULL };
int srcLinesize[4] = { bitmapData->Stride, 0, 0, 0 };
libffmpeg::sws_scale( sws_ctx, (libffmpeg::uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, srcData, srcLinesize );
bitmap->UnlockBits( bitmapData );
}
imageData->Add(bitmap);
}
else if(packet.stream_index == audioStream)
{
int b = av_dup_packet(&packet);
if(b >= 0) {
int audio_pkt_size = packet.size;
libffmpeg::uint8_t* audio_pkt_data = packet.data;
while(audio_pkt_size > 0)
{
int got_frame = 0;
int len1 = libffmpeg::avcodec_decode_audio4(aCodecCtx, pFrame, &got_frame, &packet);
if(len1 < 0)
{
audio_pkt_size = 0;
break;
}
audio_pkt_data += len1;
audio_pkt_size -= len1;
if (got_frame)
{
int data_size = libffmpeg::av_samples_get_buffer_size ( NULL, aCodecCtx->channels, pFrame->nb_samples, aCodecCtx->sample_fmt, 1 );
array<byte>^ managedBuf = gcnew array<byte>(data_size);
System::IntPtr iptr = System::IntPtr( pFrame->data[0] );
System::Runtime::InteropServices::Marshal::Copy( iptr, managedBuf, 0, data_size );
audioData->Add(managedBuf);
}
}
}
}
libffmpeg::av_free_packet(&packet);
}
libffmpeg::av_free(buffer);
libffmpeg::av_free(pFrame);
libffmpeg::avcodec_close(pCodecCtx);
libffmpeg::avformat_close_input(&pFormatCtx);
delete [] nativeFileName;
}
This function returns my images in the imageData list and the audio in the audioData list.
I can draw the images in my C# code without problems, but I can't play the audio data.
I tried playing the audio with the NAudio library, but I hear crackling from the speakers instead of sound.
C# code for playing the audio:
var waveFormat = new WaveFormat(m_sampleRate, 16, m_channels);
var _waveProvider = new BufferedWaveProvider(waveFormat) { DiscardOnBufferOverflow = true, BufferDuration = TimeSpan.FromMilliseconds(_fileReader.Length) };
var _waveOut = new DirectSoundOut();
_waveOut.Init(_waveProvider);
_waveOut.Play();
foreach (var data in audioData)
{
_waveProvider.AddSamples(data, 0, data.Length);
}
What am I doing wrong?
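One thing worth checking (an assumption on my part, not something stated in the question): ffmpeg's newer audio decoders commonly return samples as 32-bit floats, often in a planar layout such as AV_SAMPLE_FMT_FLTP, while the WaveFormat above declares 16-bit PCM; feeding float (or single-plane) data to a 16-bit provider sounds exactly like crackling. Below is a minimal C# sketch of the float-to-16-bit conversion step, assuming the byte[] buffers produced by VideoFileReader::Read contain interleaved 32-bit float samples (the helper name is hypothetical):

using System;

static class AudioSampleConverter
{
    // Hypothetical helper: assumes each byte[] from audioData holds interleaved
    // 32-bit float samples (AV_SAMPLE_FMT_FLT) and converts them to little-endian
    // 16-bit PCM, matching the 16-bit WaveFormat used above.
    public static byte[] FloatToPcm16(byte[] floatSamples)
    {
        int sampleCount = floatSamples.Length / 4;
        var pcm = new byte[sampleCount * 2];
        for (int i = 0; i < sampleCount; i++)
        {
            float sample = BitConverter.ToSingle(floatSamples, i * 4);
            // Clamp to [-1, 1] and scale to the signed 16-bit range.
            if (sample > 1f) sample = 1f;
            if (sample < -1f) sample = -1f;
            short s = (short)(sample * short.MaxValue);
            pcm[i * 2] = (byte)(s & 0xFF);
            pcm[i * 2 + 1] = (byte)((s >> 8) & 0xFF);
        }
        return pcm;
    }
}

// Possible usage with the playback loop above:
// foreach (var data in audioData)
// {
//     var pcm = AudioSampleConverter.FloatToPcm16(data);
//     _waveProvider.AddSamples(pcm, 0, pcm.Length);
// }

If aCodecCtx->sample_fmt turns out to be a planar format, pFrame->data[0] only contains the first channel, so the more robust route would be to convert on the native side with swr_convert from libswresample into AV_SAMPLE_FMT_S16 before copying into the managed buffer.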
-
How to best decide what VM to use on Google Cloud? Any best practices? [closed]
2 July 2024, by Prabhjot Kaur
I have a script that reads a Google Sheet for URLs, records a video of each URL, and then merges it with my "test" video. Both videos are about 3 minutes long. I am using an e2-standard-8 instance with Ubuntu on it, running my script in Node with Puppeteer for recording and ffmpeg for merging the videos. It takes 5 minutes per video.


My question is: should I run concurrent processes on a stronger VM that will finish in less time, or should I use a slower one? It doesn't have to run 24/7, because I only have to generate a certain number of videos every week.


Please provide the guidance that I need. Thanks in advance.


I tried creating an instance with more CPUs using free credits and ran out of them fairly quickly. I wonder if there is some other service I could use that would make the process faster?


-
Issues with video frame dropout using Accord.NET VideoFileWriter and FFMPEG
9 January 2018, by David
I am testing out writing video files using the Accord.Video library. I have a WPF project created in Visual Studio 2017, and I have installed Accord.Video.FFMPEG and Accord.Video.VFW via NuGet, along with their dependencies.
I have created a very simple video to test basic file output, but I am running into some issues. My goal is to output videos with a variable frame rate, because in the future this code will take images from a webcam and save them to a video file, and webcam video typically has a variable frame rate.
For now, in this example, I am not capturing video from a webcam; instead I am generating a simple "moving box" image and writing the frames to a video file. The box changes color every 20 frames: red, green, blue, yellow, and finally white. I also set the frame rate to 20 fps.
When I use Accord.Video.VFW, the frame rate is set correctly and all the frames are written to the video file. The resulting video looks like this (see the YouTube link): https://youtu.be/K8E9O7bJIbg
This is just a reference, however. I don't intend to use Accord.Video.VFW because it writes uncompressed data to an AVI file and it doesn't support variable frame rates. I would like to use Accord.Video.FFMPEG because it is supposed to support variable frame rates.
When I use the Accord.Video.FFMPEG library, however, the video does not come out looking the way I expect. Here is a YouTube link: https://youtu.be/cW19yQFUsLI
As you can see in that example, the box stays its first color for longer than the other colors, and it never reaches the final color (white). When I inspect the video file, not all 100 frames were written to it; there are typically 69 or 73 frames, and the expected frame rate and duration obviously do not match up.
Here is the code that generates both of these videos:
public MainWindow()
{
InitializeComponent();
Accord.Video.VFW.AVIWriter avi_writer = new Accord.Video.VFW.AVIWriter();
avi_writer.FrameRate = 20;
avi_writer.Open("test2.avi", 640, 480);
Accord.Video.FFMPEG.VideoFileWriter k = new Accord.Video.FFMPEG.VideoFileWriter();
k.FrameRate = 20;
k.Width = 640;
k.Height = 480;
k.Open("test.mp4");
for (int i = 0; i < 100; i++)
{
TimeSpan t = new TimeSpan(0, 0, 0, 0, 50 * i);
var b = new System.Drawing.Bitmap(640, 480);
var g = Graphics.FromImage(b);
var br = System.Drawing.Brushes.Blue;
if (t.TotalMilliseconds < 1000)
br = System.Drawing.Brushes.Red;
else if (t.TotalMilliseconds < 2000)
br = System.Drawing.Brushes.Green;
else if (t.TotalMilliseconds < 3000)
br = System.Drawing.Brushes.Blue;
else if (t.TotalMilliseconds < 4000)
br = System.Drawing.Brushes.Yellow;
else
br = System.Drawing.Brushes.White;
g.FillRectangle(br, 50 + i, 50, 100, 100);
System.Console.WriteLine("Frame: " + (i + 1).ToString() + ", Millis: " + t.TotalMilliseconds.ToString());
#region This is the code in question
k.WriteVideoFrame(b, t);
avi_writer.AddFrame(b);
#endregion
}
avi_writer.Close();
k.Close();
System.Console.WriteLine("Finished writing video");
}
I have tried changing a few things, on the assumption that maybe the "WriteVideoFrame" function isn't able to finish in time and I need to slow the program down so it can complete. Under that assumption, I have replaced the "WriteVideoFrame" call with the following code:
Task taskA = new Task(() => k.WriteVideoFrame(b, t));
taskA.Start();
taskA.Wait();
And I have tried the following code:
Task.WaitAll(
Task.Run( () =>
{
lock(syncObj)
{
k.WriteVideoFrame(b, t);
}
}
));
And even just a standard call where I don't specify a timestamp:
k.WriteVideoFrame(b);
None of these work. They all result in something similar.
Any suggestions on getting the WriteVideoFrame function of the Accord.Video.FFMPEG.VideoFileWriter class to work?
Thanks for any and all help!
[edits below]
I have done some more investigating. I still haven’t found a good solution, but here is what I have found so far. After declaring my VideoFileWriter object, I have tried setting up some options for the video.
When I use an H264 codec with the following options, it correctly saves 100 frames at a frame rate of 20 fps; however, normal media players (both VLC and Windows Media Player) end up playing a 10-second video instead of a 5-second one. Essentially, it seems like they play it at half speed. Here is the code that gives that result:
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.H264;
k.VideoOptions["crf"] = "18";
k.VideoOptions["preset"] = "veryfast";
k.VideoOptions["tune"] = "zerolatency";
k.VideoOptions["x264opts"] = "no-mbtree:sliced-threads:sync-lookahead=0";
Additionally, if I use an Mpeg4 codec, I get the same "half-speed" result:
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Mpeg4;
However, if I use a WMV codec, it correctly produces 100 frames at 20 fps and a 5-second video that both media players play correctly:
k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Wmv1;
Although this is good news, this still doesn’t solve the problem because WMV doesn’t support variable frame rates. Also, this still doesn’t answer the question as to why the problem is happening in the first place.
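To put numbers on the half-speed symptom, here is the arithmetic as a quick sanity check (just the figures from the code above, not an explanation of what the library is doing internally):

// Figures taken from the code above; nothing Accord-specific here.
const int frames = 100;
const int declaredFps = 20;
const int millisPerFrame = 50;                          // step used for the TimeSpan t

double expectedSeconds = frames / (double)declaredFps;  // 100 / 20 = 5.0 s
double lastTimestampMs = millisPerFrame * (frames - 1); // 4950 ms, consistent with 5 s

System.Console.WriteLine("Expected: " + expectedSeconds + " s, last timestamp: " + lastTimestampMs + " ms");

// A 10-second playback therefore means each frame ends up with roughly 100 ms of
// presentation time instead of 50 ms, i.e. the explicit timestamps and the declared
// 20 fps frame rate appear to be interpreted on different scales (an assumption
// drawn from the symptom, not a confirmed behaviour of the library).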
As always, any help would be appreciated!