
Media (1)
-
Spitfire Parade - Crisis
15 May 2011, by
Updated: September 2011
Language: English
Type: Audio
Other articles (55)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
Currently MediaSPIP is only available in French and (...) -
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If in doubt, contact the administrator of your MediaSPIP to find out. -
Adding notes and captions to images
7 February 2011, by
To be able to add notes and captions to images, the first step is to install the "Légendes" plugin.
Once the plugin is activated, you can configure it in the configuration area to change the creation, modification and deletion rights for notes. By default, only site administrators can add notes to images.
Editing when adding a media item
When adding a media item of type "image", a new button appears above the preview (...)
On other sites (7322)
-
On upload rename the file to input.mp4 not working
7 September 2014, by T.A.R.D.I.S_wolf
OK, so I'm trying to make it so that the file is uploaded and then renamed to input.mp4, so that it overwrites the old one. But it's not working.
It still keeps uploading with the original file name.
Or is there a way to just delete the files in the uploads folder after 30 minutes? Both the uploaded file and the output file.

<?php
$fileName = $_FILES["file1"]["name"]; // The file name
$fileTmpLoc = $_FILES["file1"]["tmp_name"]; // File in the PHP tmp folder
$fileType = $_FILES["file1"]["type"]; // The type of file it is
$fileSize = $_FILES["file1"]["size"]; // File size in bytes
$fileErrorMsg = $_FILES["file1"]["error"]; // 0 for false... and 1 for true
if (!$fileTmpLoc) { // if file not chosen
echo "ERROR: Please browse for a file before clicking the upload button.";
exit();
}
$newfilename = $fileName . 'input.mp4';
rename($fileName, $newfilename);
if(move_uploaded_file($fileTmpLoc, "uploads/$newfilename")){
echo "Starting ffmpeg... <br />";
echo shell_exec("ffmpeg -y -i uploads/".$newfilename." uploads/output.mp3");
echo "Done. <br /><br />";
echo '<a href="http://test.tw-wcs.com:82/ffmpeg/MP4_To_MP3/uploads/output.mp3" target="_blank">Click Here</a> To open your file in a new tab.';
} else {
echo "move_uploaded_file function failed";
}
?>
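For what it is worth, the rename step looks like the likely culprit: the concatenation builds a name like originalname.mp4input.mp4 rather than replacing it, and rename() is called on a path that does not exist yet, since at that point the upload still lives in PHP's tmp folder. A minimal sketch of what the poster seems to be after (untested, same form field names assumed) could be:

<?php
// Sketch only: always store the upload under the fixed name input.mp4 so each
// new upload overwrites the previous one; no rename() call is needed, because
// move_uploaded_file() already places the tmp file under the final name.
$fileTmpLoc = $_FILES["file1"]["tmp_name"]; // file in the PHP tmp folder
if (!$fileTmpLoc) { // no file chosen
    echo "ERROR: Please browse for a file before clicking the upload button.";
    exit();
}
$newfilename = "input.mp4"; // fixed target name, not derived from the original name
if (move_uploaded_file($fileTmpLoc, "uploads/" . $newfilename)) {
    echo "Starting ffmpeg... <br />";
    echo shell_exec("ffmpeg -y -i uploads/" . $newfilename . " uploads/output.mp3");
    echo "Done. <br /><br />";
} else {
    echo "move_uploaded_file function failed";
}
?>
-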
Concatenate multiple video files alongside delayed audio files
28 March 2022, by Spartan 117
I am currently working on a utility that is responsible for pulling audio and video files from the cloud and merging them together via FFmpeg. As I am new to FFmpeg, I am going to split the question into an FFmpeg part and a C# part, so people can answer one part or the other (or both!).


FFMPEG Part


Currently, I have a working FFmpeg arg for the case where only one video file is present and it needs to be merged with multiple audio files.


ffmpeg -i input1.mkv -i input1.mka -i input2.mka -i input3.mka -i input4.mka -filter_complex "[1:a]adelay=0s:all=1[a1pad];[2:a]adelay=20s:all=1[a2pad];[3:a]adelay=30s:all=1[a3pad];[4:a]adelay=40s:all=1[a4pad];[a1pad][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]" -map [aout] -map 0:0 output4.mkv



The delays you see in there are determined by subtracting the start time of the earliest created audio or video file from the start time of each file. I know that if I wanted to create a horizontal stack of multiple videos, I could just do


ffmpeg -i input1.mkv -i input1.mka -i input2.mkv -i input2.mka -i input3.mka -i input4.mka
-filter_complex 
"[2:v]tpad=start_duration=120:color=black[vpad]; 
 [3:a]adelay=120000:all=1[a2pad]; 
 [4:a]adelay=180000:all=1[a3pad];
 [5:a]adelay=200000:all=1[a4pad]; 
 [0:v][vpad]hstack=inputs=2[vout]; 
 [1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]" 
 -map [vout] -map [aout] 
 output.mkv



But what I want to do is both keep those delays for the audio and video files AND concatenate (not stack) those videos. How would I go about doing that?
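One possible direction (an untested sketch, not a confirmed answer): use the concat filter for the video streams in place of tpad + hstack, and keep the adelay/amix chain for the audio. This assumes both videos share the same codec parameters, resolution and frame rate (a requirement of the concat filter), and the 120s delay below is only a placeholder for whatever offset the start-time subtraction produces:

ffmpeg -i input1.mkv -i input2.mkv -i input1.mka -i input2.mka
 -filter_complex
 "[0:v][1:v]concat=n=2:v=1:a=0[vout];
 [2:a]adelay=0s:all=1[a1pad];
 [3:a]adelay=120s:all=1[a2pad];
 [a1pad][a2pad]amix=inputs=2:weights=1|1[aout]"
 -map [vout] -map [aout]
 output.mkv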


C# Part


You see that giant arg up there? The utility is supposed to generate it based on a List of recordings. Here is the model.


List<FileModel> _records;
public class FileModel {
 public string Id { get; set; }
 public string FileType { get; set; }
 public string StartTime { get; set; }
}


The utility then has to go through that list and create the arg (as seen in the FFmpeg part) to be executed by the Xabe.FFmpeg package. The way I was thinking of approaching this is to basically create two string builders: one string builder will be responsible for dealing with the inputs, the other for the -filter_complex section. Here is what I have so far


private async Task CombineAsync()
{
    var minTime = _records.Min(y => Convert.ToDateTime(y.StartTime));
    var frontBuilder = new StringBuilder("-y ");
    var middleBuilder = new StringBuilder("-filter_complex \"");
    var endString = $" -map [vout] -map [aout] {_folderPath}\\CombinedOutput.mkv";

    for (var i = 0; i < _records.Count; i++)
    {
        var type = _records[i].FileType.ToLower();
        var delay = (Convert.ToDateTime(_records[i].StartTime).Subtract(minTime)).TotalSeconds;
        frontBuilder.Append($"-i {_folderPath + "\\" + _records[i].Id} ");
        var addColon = i != _records.Count - 1 ? ";" : "";
        middleBuilder.Append(type.Equals("video") ? $"[{i}:v]tpad=start_duration={delay}:color=black[v{i}pad]{addColon} " : $"[{i}:a]adelay={delay}s:all=1[a{i}pad]{addColon} ");
    }
    middleBuilder.Append("\"");
    Console.WriteLine(frontBuilder.ToString() + middleBuilder.ToString() + endString);
    // var args = frontBuilder + middleBuilder + endString;
    // try
    // {
    //     var conversionResult = await FFmpeg.Conversions.New().Start(args);
    //     Console.WriteLine(JsonConvert.SerializeObject(conversionResult));
    // }
    // catch (Exception e)
    // {
    //     Console.WriteLine(e);
    // }
}



- 

-
Is this the correct way to go about building the argument out?


-
How in god's name do I get something like this in there, since it relies on the naming and the total count for the piping and for inputs=? (A tentative sketch follows the snippet below.)


[0:v][vpad]hstack=inputs=2[vout]; // This part will change for video concatenation depending on what gets answered above
 [1:a][a2pad][a3pad][a4pad]amix=inputs=4:weights=1|1|1|1[aout]
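On the second point, one tentative sketch (an interpretation of the intent, not a verified answer): collect the pad labels while looping over _records, then derive the tail of the graph from those lists, so that the piping and inputs= follow the counts automatically. The videoLabels / audioLabels lists below are hypothetical; they would be filled inside the existing for loop, e.g. videoLabels.Add($"[v{i}pad]") and audioLabels.Add($"[a{i}pad]"), and the helper would sit next to CombineAsync:

// Sketch only; assumes using System.Collections.Generic, System.Linq and System.Text.
private static string BuildFilterTail(List<string> videoLabels, List<string> audioLabels)
{
    var tail = new StringBuilder();

    // Video side: chain every collected video pad into one concat
    // (or hstack, depending on what the FFmpeg part settles on), producing [vout].
    tail.Append(string.Concat(videoLabels));
    tail.Append($"concat=n={videoLabels.Count}:v=1:a=0[vout]; ");

    // Audio side: mix every delayed audio pad with equal weights into [aout].
    var weights = string.Join("|", Enumerable.Repeat("1", audioLabels.Count));
    tail.Append(string.Concat(audioLabels));
    tail.Append($"amix=inputs={audioLabels.Count}:weights={weights}[aout]");

    return tail.ToString();
}

The returned string would be appended to middleBuilder just before the closing quote, and the -map [vout] -map [aout] portion of endString would stay as it is.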









-
Understanding PTS and DTS in video frames
8 August 2015, by theateist
I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem was in the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame function:
1. AVFormatContext* outContainer = NULL;
2. avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");
3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
4. AVStream *outStream = avformat_new_stream(outContainer, encoder);
5. // outStream->codec initiation
6. // ...
7. avformat_write_header(outContainer, NULL);
8. // reading and decoding packet
9. // ...
10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame);
11.
12. if (encodedPacket.pts != AV_NOPTS_VALUE)
13. encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base);
14. if (encodedPacket.dts != AV_NOPTS_VALUE)
15. encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base);
16.
17. av_interleaved_write_frame(outContainer, &encodedPacket);

After reading many posts I still do not understand:

- outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The first one was set by me, but I cannot figure out why, and what set 12800? I noticed that before line (7) outStream->time_base = 1/90000, and right after it, it changes to 1/12800. Why? When I transcode from avi to avi, i.e. change line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi");, then both before and after line (7) outStream->time_base always remains 1/25, unlike in the mp4 case. Why?
- What is the difference between the time_base of outStream->codec and that of outStream?
- To calculate the pts, av_rescale_q does the following: it takes the two time_base values, cross-multiplies their fractions, and then computes the pts. Why does it do it this way? As I debugged, encodedPacket.pts has a value that increments by 1, so why change it if it already has a value?
- At the beginning the dts value is -2, and after each rescaling it still has a negative value, but despite this the video plays correctly! Shouldn't it be positive?
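For what it is worth, the cross-multiplication in av_rescale_q(pts, bq, cq) simply computes pts * bq / cq with rounding, i.e. it re-expresses the same instant in the other tick size. With bq = 1/25 and cq = 1/12800 the factor is (1/25) / (1/12800) = 12800 / 25 = 512, so pts values of 1, 2, 3 counted in 1/25-second codec ticks become 512, 1024, 1536 in 1/12800-second stream ticks; the values only look incremental by 1 because of the coarser tick they start in.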