Other articles (88)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, as announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Organising by category

    17 May 2013, by

    In MediaSPIP, a section has two names: category and rubrique.
    The various documents stored in MediaSPIP can be filed into different categories. A category can be created by clicking on "publish a category" in the publish menu at the top right (after logging in). A category can itself be placed inside another category, so a whole tree of categories can be built.
    The next time a document is published, the newly created category will be offered (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP considered "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

On other sites (10717)

  • On-demand and seamless transcoding of individual HLS segments

    5 January 2024, by Omid Ariyan

    Background

    


    I've been meaning to implement on-demand transcoding of certain video formats such as ".mkv", ".wmv", ".mov", etc. in order to serve them on a media management server using ASP.NET Core 6.0, C# and ffmpeg.

    


    My Approach

    


    The approach I've decided to use is to serve a dynamically generated .m3u8 file, built from a segment duration of choice (e.g. 10 seconds) and the known video duration. Here's how I've done it. Note that the resolution is currently not implemented and is discarded:

    


    public string GenerateVideoOnDemandPlaylist(double duration, int segment)
{
   double interval = (double)segment;
   var content = new StringBuilder();

   content.AppendLine("#EXTM3U");
   content.AppendLine("#EXT-X-VERSION:6");
   content.AppendLine(String.Format("#EXT-X-TARGETDURATION:{0}", segment));
   content.AppendLine("#EXT-X-MEDIA-SEQUENCE:0");
   content.AppendLine("#EXT-X-PLAYLIST-TYPE:VOD");
   content.AppendLine("#EXT-X-INDEPENDENT-SEGMENTS");

   for (double index = 0; (index * interval) < duration; index++)
   {
      content.AppendLine(String.Format("#EXTINF:{0:#.000000},", ((duration - (index * interval)) > interval) ? interval : ((duration - (index * interval)))));
      content.AppendLine(String.Format("{0:00000}.ts", index));
   }

   content.AppendLine("#EXT-X-ENDLIST");

   return content.ToString();
}

[HttpGet]
[Route("stream/{id}/{resolution}.m3u8")]
public IActionResult Stream(string id, string resolution)
{
   double duration = RetrieveVideoLengthInSeconds();
   return Content(GenerateVideoOnDemandPlaylist(duration, 10), "application/x-mpegURL", Encoding.UTF8);
}
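    For orientation only (not part of the original post; the host, port and id below are purely illustrative), a player or curl would first fetch the playlist from the route above and then the numbered segments it lists:

    curl http://localhost:5000/stream/42/1080p.m3u8
    curl http://localhost:5000/stream/42/00000.ts --output 00000.ts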


    


    Here's an example of what the generated .m3u8 file looks like:

    


    #EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-INDEPENDENT-SEGMENTS
#EXTINF:10.000000,
00000.ts
#EXTINF:3.386667,
00001.ts
#EXT-X-ENDLIST


    


    So the player would ask for 00000.ts, 00001.ts, etc., and the next step is to have them generated on demand:

    


    public byte[] GenerateVideoOnDemandSegment(int index, int duration, string path)
{
   int timeout = 30000;
   int totalWaitTime = 0;
   int waitInterval = 100;
   byte[] output = Array.Empty<byte>();
   string executable = "/opt/homebrew/bin/ffmpeg";
   DirectoryInfo temp = Directory.CreateDirectory(System.IO.Path.Combine(System.IO.Path.GetTempPath(), System.IO.Path.GetRandomFileName()));
   string format = System.IO.Path.Combine(temp.FullName, "output-%05d.ts");

   using (Process ffmpeg = new())
   {
      ffmpeg.StartInfo.FileName = executable;

      ffmpeg.StartInfo.Arguments = String.Format("-ss {0} ", index * duration);
      ffmpeg.StartInfo.Arguments += String.Format("-y -t {0} ", duration);
      ffmpeg.StartInfo.Arguments += String.Format("-i \"{0}\" ", path);
      ffmpeg.StartInfo.Arguments += String.Format("-c:v libx264 -c:a aac ");
      ffmpeg.StartInfo.Arguments += String.Format("-segment_time {0} -reset_timestamps 1 -break_non_keyframes 1 -map 0 ", duration);
      ffmpeg.StartInfo.Arguments += String.Format("-initial_offset {0} ", index * duration);
      ffmpeg.StartInfo.Arguments += String.Format("-f segment -segment_format mpegts {0}", format);

      ffmpeg.StartInfo.CreateNoWindow = true;
      ffmpeg.StartInfo.UseShellExecute = false;
      ffmpeg.StartInfo.RedirectStandardError = false;
      ffmpeg.StartInfo.RedirectStandardOutput = false;

      ffmpeg.Start();

      do
      {
         Thread.Sleep(waitInterval);
         totalWaitTime += waitInterval;
      }
      while ((!ffmpeg.HasExited) && (totalWaitTime < timeout));

      if (ffmpeg.HasExited)
      {
         string filename = System.IO.Path.Combine(temp.FullName, "output-00000.ts");

         if (!File.Exists(filename))
         {
            throw new FileNotFoundException("Unable to find the generated segment: " + filename);
         }

         output = File.ReadAllBytes(filename);
      }
      else
      {
         // It's been too long. Kill it!
         ffmpeg.Kill();
      }
   }

   // Remove the temporary directory and all its contents.
   temp.Delete(true);

   return output;
}

[HttpGet]
[Route("stream/{id}/{index}.ts")]
public IActionResult Segment(string id, int index)
{
   string path = RetrieveVideoPath(id);
   return File(GenerateVideoOnDemandSegment(index, 10, path), "application/x-mpegURL", true);
}

    So, as you can see, here's the command I use to generate each segment, incrementing -ss and -initial_offset by 10 each time:

    ffmpeg -ss 0 -y -t 10 -i "video.mov" -c:v libx264 -c:a aac -segment_time 10 -reset_timestamps 1 -break_non_keyframes 1 -map 0 -initial_offset 0 -f segment -segment_format mpegts /var/folders/8h/3xdhhky96b5bk2w2br6bt8n00000gn/T/4ynrwu0q.z24/output-%05d.ts

    The Problem

    Things work on a functional level; however, the transition between segments is slightly glitchy, and the audio in particular has very short interruptions at each 10-second mark. How can I ensure the segments are seamless? What can I improve in this process?
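    Not part of the original question, but as a rough sketch of the kind of adjustment sometimes suggested for this symptom while keeping the per-segment approach: shift each segment's output timestamps to its position in the overall timeline and drop the default MPEG-TS mux delay, so independently generated segments line up. The -output_ts_offset, -muxdelay and -muxpreload flags below are assumptions layered on the author's command, not a verified fix; audio gaps at joins are also commonly attributed to AAC encoder priming, which per-segment encoding cannot fully hide.

    ffmpeg -ss 10 -y -t 10 -i "video.mov" -c:v libx264 -c:a aac -muxdelay 0 -muxpreload 0 -output_ts_offset 10 -f mpegts 00001.ts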

  • FFmpeg batch file - combine individual set files with randomized selection from another set of files

    4 August 2018, by Siampu

    I need to combine a specific set of files with a randomized selection from another set of files; for more specific context, voice clips followed by a randomized walkie-talkie beep. At the moment, I've managed to assemble the following from searching around:

    setlocal EnableDelayedExpansion
    cd beeps
    set n=0
    for %%f in (*.*) do (
       set /A n+=1
       set "file[!n!]=%%f"
    )
    set /A "rand=(n*%random%)/32768+1"
    cd ..
    for %%A IN (*.ogg) DO ffmpeg -y -i radio_beep.wav -i "%%A" -i "beeps\!file[%rand%]!" -filter_complex "[0:a:0][1:a:0][2:a:0]concat=n=3:v=0:a=1[outa]" -map "[outa]" "helper\%%A"

    At the moment, this will only run the randomization once and use that selection for every file. How can I have it do the randomization for each .ogg in the folder, and get that into FFmpeg as an input?
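    Not part of the original post: a rough, untested sketch of one common way to do this, moving the random pick inside the per-file loop and resolving the array index through an extra for variable (a nested !file[!rand!]! cannot be expanded directly):

    setlocal EnableDelayedExpansion
    rem Index the beep files once (file names only).
    set n=0
    for %%f in (beeps\*.*) do (
       set /A n+=1
       set "file[!n!]=%%~nxf"
    )
    rem Pick a fresh beep for every voice clip, inside the loop.
    for %%A in (*.ogg) do (
       set /A "rand=(!random! %% n)+1"
       for %%R in (!rand!) do (
          ffmpeg -y -i radio_beep.wav -i "%%A" -i "beeps\!file[%%R]!" -filter_complex "[0:a:0][1:a:0][2:a:0]concat=n=3:v=0:a=1[outa]" -map "[outa]" "helper\%%A"
       )
    )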

  • How do I use FFMPEG/libav to access the data in individual audio samples?

    15 October 2022, by Breadsnshreds

    The end result is I'm trying to visualise the audio waveform to use in a DAW-like software. So I want to get each sample's value and draw it. With that in mind, I'm currently stumped by trying to gain access to the values stored in each sample. For the time being, I'm just trying to access the value in the first sample - I'll build it into a loop once I have some working code.


    I started off by following the code in this example. However, LibAV/FFMPEG has been updated since then, so a lot of the code is deprecated or straight up doesn't work the same anymore.


    Based on the example above, I believe the logic is as follows:


    1. get the formatting info of the audio file
    2. get audio stream info from the format
    3. check that the codec required for the stream is an audio codec
    4. get the codec context (I think this is info about the codec) - this is where it gets kinda confusing for me
    5. create an empty packet and frame to use - packets are for holding compressed data and frames are for holding uncompressed data
    6. the format reads the first frame from the audio file into our packet
    7. pass that packet into the codec context to be decoded
    8. pass our frame to the codec context to receive the uncompressed audio data of the first frame
    9. create a buffer to hold the values and try allocating samples to it from our frame

    From debugging my code, I can see that step 7 succeeds and the packet that was empty receives some data. In step 8, the frame doesn't receive any data. This is what I need help with. I get that if I get the frame, assuming a stereo audio file, I should have two samples per frame, so really I just need your help to get uncompressed data into the frame.


    I've scoured through the documentation for loads of different classes, and I'm pretty sure I'm using the right classes and functions to achieve my goal, but evidently not (I'm also using Qt, so I use qDebug throughout, and a QString to hold the URL for the audio file as path). So without further ado, here's my code:

    // Step 1 - get the formatting info of the audio file
    AVFormatContext* format = avformat_alloc_context();
    if (avformat_open_input(&format, path.toStdString().c_str(), NULL, NULL) != 0) {
        qDebug() << "Could not open file " << path;
        return -1;
    }

// Step 2 - get audio stream info from the format
    if (avformat_find_stream_info(format, NULL) < 0) {
        qDebug() << "Could not retrieve stream info from file " << path;
        return -1;
    }

// Step 3 - check that the codec required for the stream is an audio codec
    int stream_index = -1;
    for (unsigned int i = 0; i < format->nb_streams; i++) {
        if (format->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            stream_index = i;
            break;
        }
    }

    if (stream_index == -1) {
        qDebug() << "Could not retrieve audio stream from file " << path;
        return -1;
    }

// Step 4 - get the codec context
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    avcodec_open2(codecContext, codec, NULL);

// Step 5 - create an empty packet and frame to use
    AVPacket *packet = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

// Step 6 - the format reads the first frame from the audio file into our packet
    av_read_frame(format, packet);
// Step 7 - pass that packet into the codec context to be decoded
    avcodec_send_packet(codecContext, packet);
// Step 8 - pass our frame to the codec context to receive the uncompressed audio data of the first frame
    avcodec_receive_frame(codecContext, frame);

// Step 9 - create a buffer to hold the values and try allocating samples to it from our frame
    double *buffer;
    av_samples_alloc((uint8_t**) &buffer, NULL, 1, frame->nb_samples, AV_SAMPLE_FMT_DBL, 0);
    qDebug() << "packet: " << &packet;
    qDebug() << "frame: " << frame;
    qDebug() << "buffer: " << buffer;

    For the time being, step 9 is incomplete, as you can probably tell. But for now, I need help with step 8. Am I missing a step, using the wrong function, or the wrong class? Cheers.
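    As an editorial aside, not part of the original question: a codec context allocated with avcodec_alloc_context3() does not automatically pick up the stream's codec parameters (sample rate, channel layout, extradata), and avcodec_receive_frame() often yields no data when they are missing. A hedged sketch of what step 4 could look like with that extra avcodec_parameters_to_context() call; whether this is the actual cause here is an assumption:

    // Step 4 - get the codec context and copy the stream's codec parameters into it
    const AVCodec *codec = avcodec_find_decoder(format->streams[stream_index]->codecpar->codec_id);
    AVCodecContext *codecContext = avcodec_alloc_context3(codec);
    if (avcodec_parameters_to_context(codecContext, format->streams[stream_index]->codecpar) < 0) {
        qDebug() << "Could not copy codec parameters for file " << path;
        return -1;
    }
    if (avcodec_open2(codecContext, codec, NULL) < 0) {
        qDebug() << "Could not open codec for file " << path;
        return -1;
    }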