
Media (3)
-
The Slip - Artworks
26 September 2011, by
Updated: September 2011
Language: English
Type: Text
-
Podcasting Legal Guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
-
Creative Commons informational flyer
16 May 2011, by
Updated: July 2013
Language: English
Type: Text
Other articles (88)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable version of MediaSPIP. Its official release date is 21 June 2013, announced here.
The zip file provided here contains only the MediaSPIP sources in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Organising by category
17 May 2013, by
In MediaSPIP, a section has two names: category and section.
The documents stored in MediaSPIP can be filed under different categories. A category can be created by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can itself be filed inside another category, so a tree of categories can be built.
When a document is next published, the newly created category will be offered (...) -
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (10717)
-
On-demand and seamless transcoding of individual HLS segments
5 January 2024, by Omid Ariyan

Background


I've been meaning to implement on-demand transcoding of certain video formats such as ".mkv", ".wmv" and ".mov" in order to serve them from a media management server built with ASP.NET Core 6.0, C# and ffmpeg.


My Approach


The approach I've decided on is to serve a dynamically generated .m3u8 file, built from a segment duration of choice (e.g. 10 s) and the known video duration. Here's how I've done it. Note that the resolution is currently not implemented and is discarded:


public string GenerateVideoOnDemandPlaylist(double duration, int segment)
{
    double interval = (double)segment;
    var content = new StringBuilder();

    content.AppendLine("#EXTM3U");
    content.AppendLine("#EXT-X-VERSION:6");
    content.AppendLine(String.Format("#EXT-X-TARGETDURATION:{0}", segment));
    content.AppendLine("#EXT-X-MEDIA-SEQUENCE:0");
    content.AppendLine("#EXT-X-PLAYLIST-TYPE:VOD");
    content.AppendLine("#EXT-X-INDEPENDENT-SEGMENTS");

    for (double index = 0; (index * interval) < duration; index++)
    {
        double remaining = duration - (index * interval);
        content.AppendLine(String.Format("#EXTINF:{0:#.000000},", (remaining > interval) ? interval : remaining));
        content.AppendLine(String.Format("{0:00000}.ts", index));
    }

    content.AppendLine("#EXT-X-ENDLIST");

    return content.ToString();
}

[HttpGet]
[Route("stream/{id}/{resolution}.m3u8")]
public IActionResult Stream(string id, string resolution)
{
    double duration = RetrieveVideoLengthInSeconds();
    return Content(GenerateVideoOnDemandPlaylist(duration, 10), "application/x-mpegURL", Encoding.UTF8);
}



Here's an example of what the generated .m3u8 file looks like:


#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:10.000000,
00000.ts
#EXTINF:3.386667,
00001.ts
#EXT-X-ENDLIST



So the player asks for 00000.ts, 00001.ts, etc., and the next step is to generate them on demand:


public byte[] GenerateVideoOnDemandSegment(int index, int duration, string path)
{
    int timeout = 30000;
    int totalWaitTime = 0;
    int waitInterval = 100;
    byte[] output = Array.Empty<byte>();
    string executable = "/opt/homebrew/bin/ffmpeg";
    DirectoryInfo temp = Directory.CreateDirectory(System.IO.Path.Combine(System.IO.Path.GetTempPath(), System.IO.Path.GetRandomFileName()));
    string format = System.IO.Path.Combine(temp.FullName, "output-%05d.ts");

    using (Process ffmpeg = new())
    {
        ffmpeg.StartInfo.FileName = executable;

        ffmpeg.StartInfo.Arguments = String.Format("-ss {0} ", index * duration);
        ffmpeg.StartInfo.Arguments += String.Format("-y -t {0} ", duration);
        ffmpeg.StartInfo.Arguments += String.Format("-i \"{0}\" ", path);
        ffmpeg.StartInfo.Arguments += "-c:v libx264 -c:a aac ";
        ffmpeg.StartInfo.Arguments += String.Format("-segment_time {0} -reset_timestamps 1 -break_non_keyframes 1 -map 0 ", duration);
        ffmpeg.StartInfo.Arguments += String.Format("-initial_offset {0} ", index * duration);
        ffmpeg.StartInfo.Arguments += String.Format("-f segment -segment_format mpegts {0}", format);

        ffmpeg.StartInfo.CreateNoWindow = true;
        ffmpeg.StartInfo.UseShellExecute = false;
        ffmpeg.StartInfo.RedirectStandardError = false;
        ffmpeg.StartInfo.RedirectStandardOutput = false;

        ffmpeg.Start();

        do
        {
            Thread.Sleep(waitInterval);
            totalWaitTime += waitInterval;
        }
        while (!ffmpeg.HasExited && totalWaitTime < timeout);

        if (ffmpeg.HasExited)
        {
            string filename = System.IO.Path.Combine(temp.FullName, "output-00000.ts");

            if (!File.Exists(filename))
            {
                throw new FileNotFoundException("Unable to find the generated segment: " + filename);
            }

            output = File.ReadAllBytes(filename);
        }
        else
        {
            // It's been too long. Kill it!
            ffmpeg.Kill();
        }
    }

    // Remove the temporary directory and all its contents.
    temp.Delete(true);

    return output;
}

[HttpGet]
[Route("stream/{id}/{index}.ts")]
public IActionResult Segment(string id, int index)
{
    string path = RetrieveVideoPath(id);
    return File(GenerateVideoOnDemandSegment(index, 10, path), "application/x-mpegURL", true);
}


So, as you can see, here's the command used to generate each segment, with -ss and -initial_offset incremented by 10 for each one:


ffmpeg -ss 0 -y -t 10 -i "video.mov" -c:v libx264 -c:a aac -segment_time 10 -reset_timestamps 1 -break_non_keyframes 1 -map 0 -initial_offset 0 -f segment -segment_format mpegts /var/folders/8h/3xdhhky96b5bk2w2br6bt8n00000gn/T/4ynrwu0q.z24/output-%05d.ts



The Problem


Things work on a functional level; however, the transition between segments is slightly glitchy, and the audio in particular has very short interruptions at each 10-second mark. How can I ensure the segments are seamless? What can I improve in this process?
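For what it's worth (this is not from the original thread): two likely contributors are that each independently encoded AAC segment starts with encoder priming samples, and that the H.264 cut points don't fall exactly on keyframes at the 10-second boundaries. One direction worth trying is to force keyframes onto the segment grid and stamp each segment's output timestamps with its real offset. The flags used below (`-force_key_frames`, `-output_ts_offset`, `-muxdelay`, `-muxpreload`) are real ffmpeg options, but the command as a whole is a sketch, not a verified fix; the helper only prints the command it would run:

```shell
# Sketch: build the per-segment ffmpeg command with keyframes forced on the
# segment grid and output timestamps offset to the segment's real start.
# build_segment_cmd INDEX DURATION INPUT OUTPUT -> prints the command.
build_segment_cmd() {
    index=$1; duration=$2; input=$3; output=$4
    offset=$((index * duration))
    echo ffmpeg -ss "$offset" -y -t "$duration" -i "$input" \
        -c:v libx264 -c:a aac \
        -force_key_frames "expr:gte(t,n_forced*$duration)" \
        -output_ts_offset "$offset" \
        -muxdelay 0 -muxpreload 0 \
        -f mpegts "$output"
}

build_segment_cmd 1 10 video.mov 00001.ts
```

If per-request generation isn't a hard requirement, running ffmpeg's segment muxer once over the whole file and caching the resulting .ts files sidesteps the join problem entirely, since all segments then come from a single continuous encode.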


-
FFmpeg batch file - combine individual set files with randomized selection from another set of files
4 August 2018, by Siampu

I need to combine a specific set of files with a randomized selection from another set of files; more specifically, voice clips followed by a randomized walkie-talkie beep. At the moment, I've managed to assemble this so far from searching around:

setlocal EnableDelayedExpansion
cd beeps
set n=0
for %%f in (*.*) do (
    set /A n+=1
    set "file[!n!]=%%f"
)
set /A "rand=(n*%random%)/32768+1"
cd ..
for %%A IN (*.ogg) DO ffmpeg -y -i radio_beep.wav -i "%%A" -i "beeps\!file[%rand%]!" -filter_complex "[0:a:0][1:a:0][2:a:0]concat=n=3:v=0:a=1[outa]" -map "[outa]" "helper\%%A"

At the moment, this only runs the randomization once and uses that selection for every file. How can I have it redo the randomization for each .ogg in the folder, and get that into FFmpeg as an input?
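Not from the thread, but for context: in batch, `%rand%` is expanded once when the `for` line is parsed, so re-rolling inside the loop (with delayed `!rand!` expansion or a `call`ed subroutine) is the usual fix. The same per-file re-roll can be sketched in bash rather than batch; directory names mirror the question, and the script only prints the commands it would run:

```shell
#!/bin/bash
shopt -s nullglob
# pick_beep: choose one file from beeps/ uniformly at random,
# re-evaluated on every call rather than once up front.
pick_beep() {
    ls beeps | shuf -n 1
}

for f in *.ogg; do
    beep=$(pick_beep)   # fresh roll per input file
    echo ffmpeg -y -i radio_beep.wav -i "$f" -i "beeps/$beep" \
        -filter_complex "[0:a:0][1:a:0][2:a:0]concat=n=3:v=0:a=1[outa]" \
        -map "[outa]" "helper/$f"
done
```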
-
How do I use FFMPEG/libav to access the data in individual audio samples ?
15 October 2022, by Breadsnshreds

The end result is that I'm trying to visualise the audio waveform for use in DAW-like software, so I want to get each sample's value and draw it. With that in mind, I'm currently stumped trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.


I started off by following the code in this example. However, LibAV/FFmpeg has been updated since then, so a lot of the code is deprecated or simply no longer works the same.


Based on the example above, I believe the logic is as follows:


1. get the formatting info of the audio file
2. get audio stream info from the format
3. check that the codec required for the stream is an audio codec
4. get the codec context (I think this is info about the codec) - this is where it gets kind of confusing for me
5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
6. the format reads the first frame from the audio file into our packet
7. pass that packet into the codec context to be decoded
8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
9. create a buffer to hold the values and try allocating samples to it from our frame
From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, however, the frame doesn't receive any data. This is what I need help with. I get that, assuming a stereo audio file, I should have two samples per frame, so really I just need your help getting uncompressed data into the frame.


I've scoured the documentation for loads of different classes, and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not. (I'm also using Qt, so I use qDebug throughout, and a QString named path holds the location of the audio file.) So without further ado, here's my code:


// Step 1 - get the formatting info of the audio file
AVFormatContext* format = avformat_alloc_context();
if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
    qDebug() << "Could not open file " << path;
    return -1;
}

// Step 2 - get audio stream info from the format
if (avformat_find_stream_info(format, NULL) < 0) {
    qDebug() << "Could not retrieve stream info from file " << path;
    return -1;
}

// Step 3 - check that the codec required for the stream is an audio codec
int stream_index = -1;
for (unsigned int i = 0; i < format->nb_streams; i++) {
    if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
        stream_index = i;
        break;
    }
}

if (stream_index == -1) {
    qDebug() << "Could not retrieve audio stream from file " << path;
    return -1;
}

// Step 4 - get the codec context
const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
AVCodecContext *codecContext = avcodec_alloc_context3(codec);
avcodec_open2(codecContext, codec, NULL);

// Step 5 - create an empty packet and frame to use
AVPacket *packet = av_packet_alloc();
AVFrame *frame = av_frame_alloc();

// Step 6 - the format reads the first frame from the audio file into our packet
av_read_frame(format, packet);

// Step 7 - pass that packet into the codec context to be decoded
avcodec_send_packet(codecContext, packet);

// Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
avcodec_receive_frame(codecContext, frame);

// Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
double *buffer;
av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
qDebug() << "packet: " << &packet;
qDebug() << "frame: " << frame;
qDebug() << "buffer: " << buffer;



For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, or using the wrong function or class? Cheers.