
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (28)
-
Publishing on MédiaSpip
13 June 2013 - Can I post content from an iPad tablet?
Yes, if your MédiaSpip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.
-
Supporting all media types
13 April 2011 - Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
Contribute to a better visual interface
13 April 2011 - MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (6087)
-
Concatenate multiple video files alongside delayed audio files
28 March 2022, by Spartan 117
I am currently working on a utility that is responsible for pulling audio and video files from the cloud and merging them together via FFMPEG. As I am new to FFMPEG, I am going to split the question into an FFMPEG part and a C# part so people can answer either one part or the other (or both!).


FFMPEG Part


Currently, I have a working FFMPEG arg for the case where only one video file is present and it needs to be merged with multiple audio files.


ffmpeg -i input1.mkv -i input1.mka -i input2.mka -i input3.mka -i input4.mka -filter_complex "[1:a]adelay=0s:all=1[a1pad];[2:a]adelay=20s:all=1[a2pad];[3:a]adelay=30s:all=1[a3pad];[4:a]adelay=40s:all=1[a4pad];[a1pad][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]" -map [aout] -map 0:0 output4.mkv



The delays you see in there are determined by subtracting the start time of each file from the start time of the earliest created audio or video file. I know that if I wanted to create a horizontal stack of multiple videos, I could just do


ffmpeg -i input1.mkv -i input1.mka -i input2.mkv -i input2.mka -i input3.mka -i input4.mka
-filter_complex 
"[2:v]tpad=start_duration=120:color=black[vpad]; 
 [3:a]adelay=120000:all=1[a2pad]; 
 [4:a]adelay=180000:all=1[a3pad];
 [5:a]adelay=200000:all=1[a4pad]; 
 [0:v][vpad]hstack=inputs=2[vout]; 
 [1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]" 
 -map [vout] -map [aout] 
 output.mkv



But what I want to do is both keep those delays for the audio and video files AND concatenate (not stack) those videos. How would I go about doing that?
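
For reference, one shape such a command could take, sketched here purely as an untested illustration: it assumes the concat filter is acceptable, that both videos share the same resolution, frame rate and pixel format, and that the tpad value now represents the gap between the end of the first video and the start of the second (the 10 seconds below is a placeholder) rather than the absolute offset used for hstack. The adelay values are unchanged because concatenation does not alter the audio timeline.

ffmpeg -i input1.mkv -i input1.mka -i input2.mkv -i input2.mka -i input3.mka -i input4.mka
 -filter_complex
 "[2:v]tpad=start_duration=10:color=black[v2pad];
  [0:v][v2pad]concat=n=2:v=1:a=0[vout];
  [3:a]adelay=120000:all=1[a2pad];
  [4:a]adelay=180000:all=1[a3pad];
  [5:a]adelay=200000:all=1[a4pad];
  [1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]"
 -map [vout] -map [aout]
 output.mkv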


C# Part


You see that giant arg up there? The utility is supposed to generate that based on a List of recordings. Here is the model.


List<FileModel> _records;
public class FileModel {
    public string Id { get; set; }
    public string FileType { get; set; }
    public string StartTime { get; set; }
}


The utility then has to go through that list and create the arg (as seen in the FFMPEG part) to be executed by the Xabe.FFMPEG package. The way I was thinking to approach this is to create two string builders: one responsible for dealing with the inputs, the other for the -filter_complex portion. Here is what I have so far:


private async Task CombineAsync()
{
    // Earliest start time across all recordings; every delay is relative to it.
    var minTime = _records.Min(y => Convert.ToDateTime(y.StartTime));
    var frontBuilder = new StringBuilder("-y ");
    var middleBuilder = new StringBuilder("-filter_complex \"");
    var endString = $" -map [vout] -map [aout] {_folderPath}\\CombinedOutput.mkv";

    for (var i = 0; i < _records.Count; i++)
    {
        var type = _records[i].FileType.ToLower();
        var delay = (Convert.ToDateTime(_records[i].StartTime).Subtract(minTime)).TotalSeconds;
        frontBuilder.Append($"-i {_folderPath + "\\" + _records[i].Id} ");
        // Append the ';' filter separator for every entry except the last one.
        var addColon = i != _records.Count - 1 ? ";" : "";
        middleBuilder.Append(type.Equals("video")
            ? $"[{i}:v]tpad=start_duration={delay}:color=black[v{i}pad]{addColon} "
            : $"[{i}:a]adelay={delay}s:all=1[a{i}pad]{addColon} ");
    }
    middleBuilder.Append("\"");
    Console.WriteLine(frontBuilder.ToString() + middleBuilder.ToString() + endString);
    // var args = frontBuilder + middleBuilder + endString;
    // try
    // {
    //     var conversionResult = await FFmpeg.Conversions.New().Start(args);
    //     Console.WriteLine(JsonConvert.SerializeObject(conversionResult));
    // }
    // catch (Exception e)
    // {
    //     Console.WriteLine(e);
    // }
}



-
Is this the correct way to go about building the argument out?

-
How in God's name do I get something like this in there, since it relies on the pad naming and on the total count for the piping and for the inputs= value? (One way this could be built dynamically is sketched after the snippet below.)


[0:v][vpad]hstack=inputs=2[vout]; // This part will change for video concatenation depending on what gets answered above
 [1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]
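
Not an authoritative answer, but one way this might be handled given the loop above (the variable names videoLabels, audioLabels, weights, mixLine and videoLine are hypothetical additions, and Enumerable.Repeat needs using System.Linq;): collect the generated pad labels while walking _records, then derive the combining lines from the counts.

// Hedged sketch only: gather the pad labels produced by the CombineAsync loop,
// then build the hstack/concat and amix lines from the counts.
var videoLabels = new List<string>();   // e.g. "[v0pad]", "[v2pad]"
var audioLabels = new List<string>();   // e.g. "[a1pad]", "[a3pad]", "[a4pad]"

for (var i = 0; i < _records.Count; i++)
{
    if (_records[i].FileType.ToLower() == "video")
        videoLabels.Add($"[v{i}pad]");
    else
        audioLabels.Add($"[a{i}pad]");
}

// e.g. "[a1pad][a3pad][a4pad]amix=inputs=3:weights=1|1|1[aout]"
var weights = string.Join("|", Enumerable.Repeat("1", audioLabels.Count));
var mixLine = $"{string.Concat(audioLabels)}amix=inputs={audioLabels.Count}:weights={weights}[aout]";

// e.g. "[v0pad][v2pad]hstack=inputs=2[vout]" (or a concat line, depending on
// what gets answered in the FFMPEG part)
var videoLine = $"{string.Concat(videoLabels)}hstack=inputs={videoLabels.Count}[vout]";

// Appended before middleBuilder.Append("\"") closes the filter graph; the "; "
// prefix is needed because the last per-input filter has no trailing separator.
middleBuilder.Append("; " + videoLine + "; " + mixLine);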









-
VP8 for Real-time Video Applications
15 February 2011, by noreply@blogger.com (John Luther)
With the growing interest in videoconferencing on the web platform, it’s a good time to explore the features of VP8 that make it an exceptionally good codec for real-time applications like videoconferencing.
VP8 Design History & Features
Real-time applications were a primary use case when VP8 was designed. The VP8 encoder has features specifically engineered to overcome the challenges inherent in compressing and transmitting real-time video data.
- Processor-adaptive encoding. 16 encoder complexity levels automatically (or manually) adjust encoder features such as motion search strategy, quantizer optimizations, and loop filtering strength.
- Encoder can be configured to use a target percentage of the host CPU.
- Ability to measure the time taken to encode each frame and adjust encoder complexity dynamically to keep the encoding time per frame constant.
- Robust error recovery (packet retransmission, forward error correction, recovery frame/new keyframe requests)
- Temporal scalability (i.e., a single video bitstream that can degrade as needed depending on a participant’s available bandwidth)
- Highly efficient decoding performance on low-power devices. Conventional video technology has grown to a state of complexity where dedicated hardware chips are needed to make it work well. With VP8, software-based solutions have proven to meet customer needs without requiring specialized hardware.
For more information about real-time video features in VP8, see the slide presentation by WebM Project engineer Paul Wilkins (PDF file).
Commercially Available Products
Millions of people around the world have been using VP7/8 for video chat for years. VP8 is deployed in some of today’s most popular consumer videoconferencing applications, including Skype (group video calling), Sightspeed, ooVoo and Logitech Vid. All of these vendors are active WebM project supporters. VP8’s predecessor, VP7, has been used in Skype video calling since 2005 and is supported in the new Skype app for iPhone. Other real-time VP8 implementations are coming soon, including ooVoo, and VP8 will play a leading role in Google’s plans for real-time applications on the web platform.
Real-time applications will be extremely important as the web platform matures. The WebM community has made significant improvements in VP8 for real-time use cases since our launch and will continue to do so in the future.
John Luther is Product Manager of the WebM Project.
-
How to solve Accord.Video.FFMPEG memory leak problem
26 May 2021, by mfwoo
I am developing a digital billboard application that allows customers to click on the touch screen to go back and forth.


Screen 0 -> touch -> Screen 1 -> touch -> Screen 2 -> time out -> Screen 0


If no interaction happens, Screen 0 loops indefinitely. Each screen plays its own MP4 file.


However, on every running cycle of Screen 1, it gobbles up memory and in no time the application crashes.


Is it because the VideoFileSource video object is not being disposed properly, or because of some threading problem in video_NewFrame?


I ask because I occasionally get this error: "Invoke or BeginInvoke cannot be called on a control until the window handle is created".


I am using VS2017 and .NET Framework 4.5 with Accord.Video.FFMPEG from Accord.NET version 3.8.


Screen 0 MP4 size - 5.5MB
Screen 1 MP4 size - 5.6MB
Screen 2 MP4 size - 7.0MB


Here is my code:
...


Bitmap image;
VideoFileSource video;
int screenIdx = 0;
bool enableClicking = true;
bool isTimeOut = false;
string VideoPath = @"d:\KioskApp\Bkgrnd\";

public frmMain()
 {
 InitializeComponent(); 
 StartFirstScreen();
 tmrScreen01.Interval = 10000;
 tmrScreen02.Interval = 10000;
 }
 
 private void StartFirstScreen()
 {
 try
 {
 string fileName = VideoPath + Screen00();
 video = new VideoFileSource(fileName);
 video.PlayingFinished += new Accord.Video.PlayingFinishedEventHandler(video_Finished);
 video.NewFrame += new Accord.Video.NewFrameEventHandler(video_NewFrame);
 video.Start();
 screenIdx = 1;
 }
 catch (Exception ex)
 {
 string strErrMsg = "StartFirstScreen - " + ex.Message;
 MessageBox.Show(strErrMsg);
 }
 }
 
 private void video_NewFrame(object sender, Accord.Video.NewFrameEventArgs eventArgs)
 {
 try
 {
 Invoke(new Action(() =>
 {
 System.Drawing.Image OldImage;
 OldImage = pictureBox1.Image;
 pictureBox1.Image = AForge.Imaging.Image.Clone(eventArgs.Frame);
 if (OldImage != null)
 OldImage.Dispose();
 })); 
 }
 catch (Exception ex)
 {
 var strErrMsg = "video_NewFrame - " + ex.Message;
 MessageBox.Show(strErrMsg);
 }
 }
 
 private void video_Finished(object sender, Accord.Video.ReasonToFinishPlaying reason)
 {
 try
 {
 if (screenIdx == 1)
 {
 video.PlayingFinished -= video_Finished;
 video.NewFrame -= video_NewFrame;
 video = null; 
 StartFirstScreen();
 return;
 }
 enableClicking = true;

 }
 catch (Exception ex)
 {
 var strErrMsg = "video_Finished - " + ex.Message;
 MessageBox.Show(strErrMsg);

 }
 }
 
 void startLastScreen()
 {
 string fileName = string.Empty;
 video.SignalToStop();
 fileName = VideoPath + Screen02();
 screenIdx = 0;
 if (object.ReferenceEquals(null, video))
 {
 video = new VideoFileSource(fileName);
 }
 else
 {
 video = null;
 video = new VideoFileSource(fileName);
 }

 video.PlayingFinished += new Accord.Video.PlayingFinishedEventHandler(video_Finished);
 video.NewFrame += new Accord.Video.NewFrameEventHandler(video_NewFrame);
 video.Start();
 enableClicking = false;
 }
 
 private void pictureBox1_Click(object sender, EventArgs e)
 {
 if (!enableClicking && screenIdx != 1) return;

 tmrScreen01.Stop();
 tmrScreen02.Stop();
 
 // Check clickable area before allow to proceed to the next screen 
 string fileName = string.Empty;
 video.SignalToStop();
 video.Stop();

 if (screenIdx == 0)
 {
 fileName = VideoPath + Screen00();
 screenIdx = 1;
 }
 else if (screenIdx == 1)
 {
 fileName = VideoPath + Screen01();
 screenIdx = 2;
 
 }
 else if (screenIdx == 2)
 {
 fileName = VideoPath + Screen02();
 screenIdx = 0;
 
 }

 if (object.ReferenceEquals(null, video))
 {
 video = new VideoFileSource(fileName);
 }
 else
 {
 video = null;
 video = new VideoFileSource(fileName);
 }
 video.PlayingFinished += new Accord.Video.PlayingFinishedEventHandler(video_Finished);
 video.NewFrame += new Accord.Video.NewFrameEventHandler(video_NewFrame);
 enableClicking = false;
 isTimeOut = false;
 video.Start();
 }



...
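
Not a verified fix, but since the question points at disposal and at the handle-creation race, here is one hedged sketch of how video_NewFrame could be hardened (IsHandleCreated, IsDisposed and BeginInvoke are standard WinForms members; the overall structure is only an assumption about where the leak and the exception come from):

private void video_NewFrame(object sender, Accord.Video.NewFrameEventArgs eventArgs)
{
    // Drop frames that arrive before the form has a window handle or after
    // disposal, instead of letting Invoke throw.
    if (!IsHandleCreated || IsDisposed)
        return;

    // Clone synchronously: the bitmap in eventArgs belongs to the decoder and
    // must not be used after this handler returns.
    var newImage = AForge.Imaging.Image.Clone(eventArgs.Frame);

    BeginInvoke(new Action(() =>
    {
        var oldImage = pictureBox1.Image;
        pictureBox1.Image = newImage;
        oldImage?.Dispose();   // release the bitmap of the previous frame
    }));
}

In the same hedged spirit, unsubscribing PlayingFinished and NewFrame and calling SignalToStop() followed by WaitForStop() (if your Accord build exposes it) before assigning a new VideoFileSource should stop the old source, and any frames it still holds, from staying alive between screens.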