Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (53)

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your Médiaspip installation is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Support for all media types

    10 April 2011

    Unlike many modern software packages and document-sharing platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (open office, microsoft office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (10509)

  • Difference between DirectShowSource() and FFmpegSource2() in AviSynth

    29 March 2024, by MarianD

    For non-.avi A/V sources (such as .mp3, .mp4, etc.) there are (at least) two possibilities for reading those media files in AviSynth (in Windows):

    



      

    • The built-in media filter DirectShowSource(), using Microsoft's DirectShow media architecture.
    • The AviSynth plugin FFmpegSource2() (alias FFMS2()), using FFmpeg and nothing else.

    What are the advantages and disadvantages of each? Which is more reliable, frame/sample accurate, etc.?


  • Why is chrominance lost when I OpenSharedResource from an ffmpeg AVFrame resource?

    19 April 2022, by Logoro
    D3D11_TEXTURE2D_DESC texture_desc = {0};
    texture_desc.Width = 640;
    texture_desc.Height = 480;
    texture_desc.MipLevels = 1;
    texture_desc.Format = DXGI_FORMAT_NV12;
    texture_desc.SampleDesc.Count = 1;
    texture_desc.ArraySize = 1;
    texture_desc.Usage = D3D11_USAGE_DEFAULT;
    texture_desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> temp_texture_for_my_device{nullptr};
    my_device->CreateTexture2D(&texture_desc, NULL, &temp_texture_for_my_device);

    Microsoft::WRL::ComPtr<IDXGIResource> dxgi_resource{nullptr};
    temp_texture_for_my_device.As(&dxgi_resource);
    HANDLE shared_handle = NULL;
    dxgi_resource->GetSharedHandle(&shared_handle);
    dxgi_resource->Release();

    Microsoft::WRL::ComPtr<ID3D11Texture2D> temp_texture_for_ffmpeg_device{nullptr};
    ffmpeg_device->OpenSharedResource(shared_handle, __uuidof(ID3D11Texture2D), (void**)temp_texture_for_ffmpeg_device.GetAddressOf());
    ffmpeg_device_context->CopySubresourceRegion(temp_texture_for_ffmpeg_device.Get(), 0, 0, 0, 0, (ID3D11Texture2D*)ffmpeg_avframe->data[0], (int)ffmpeg_avframe->data[1], NULL);
    ffmpeg_device_context->Flush();


    If I copy temp_texture_for_ffmpeg_device to a D3D11_USAGE_STAGING texture, it's normal, but when I copy temp_texture_for_my_device to a D3D11_USAGE_STAGING texture, I lose the chrominance data.

    When I map the texture to the CPU via D3D11_USAGE_STAGING:

    temp_texture_for_ffmpeg_device: RowPitch is 768, DepthPitch is 768 * 720.
    temp_texture_for_my_device: RowPitch is 1024, DepthPitch is 1024 * 480.

    I think there are some different parameters between the two devices (or device contexts?), but I don't know what parameters would cause such a difference in behavior.

    my_device and my_device_context are created by D3D11On12CreateDevice.
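    For what it's worth, NV12's plane layout is consistent with the two DepthPitch values reported above. A small arithmetic sketch (plain Python, helper names made up for illustration):

    ```python
    def nv12_total_rows(height):
        # NV12 stores a full-height Y (luma) plane followed by a
        # half-height interleaved UV (chroma) plane.
        return height + height // 2

    def depth_pitch(row_pitch, rows):
        # DepthPitch of a mapped texture = stride (RowPitch) * number of rows
        return row_pitch * rows

    # 640x480 NV12 mapped from the ffmpeg device: 768 * 720 spans the
    # 480 luma rows plus the 240 chroma rows, so chroma is present.
    assert depth_pitch(768, nv12_total_rows(480)) == 768 * 720

    # Mapped from my_device: 1024 * 480 only spans the 480 luma rows,
    # matching the observation that the chrominance data is missing.
    assert depth_pitch(1024, 480) == 1024 * 480
    ```

    In other words, the staging copy on the D3D11On12 side is being laid out as if the texture had no chroma plane at all, which at least narrows down where the data is being lost.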


  • How to HLS-live-stream incoming batches of individual frames, "appending" to a m3u8 playlist in real time, with ffmpeg?

    20 November 2024, by Rob

    My overall goal:

    Server-side:

    • I have batches of sequential, JPEG-encoded frames (8-16) arriving from time to time, generated at roughly 2 FPS.
    • I would like to host an HLS live stream where, when a new batch of frames arrives, I encode those new frames as h264 .ts segments with ffmpeg and have the new .ts segments automatically added to the HLS stream (e.g. the .m3u8 file).

    Client/browser-side:

    • When the .m3u8 is updated, I would like the video stream being watched to simply "continue", advancing from the point where new .ts segments have been added.
    • I do not need the user to scrub backwards in time; the client just needs to support live observation of the stream.

    My current approach:

    Server-side:

    To generate the "first" few segments of the stream, I'm attempting the below (just the command line for now, to get ffmpeg working right; ultimately this will be automated via a Python script):

    For reference, I'm using ffmpeg version 3.4.6-0ubuntu0.18.04.1.


    ffmpeg -y -framerate 2 -i /frames/batch1/frame_%d.jpg \
       -c:v libx264 -crf 21 -preset veryfast -g 2 \
       -f hls -hls_time 4 -hls_list_size 4 -segment_wrap 4 -segment_list_flags +live video/stream.m3u8

    where the /frames/batch1/ folder contains a sequence of frames (e.g. frame_01.jpg, frame_02.jpg, etc.). This already doesn't appear to work correctly, because it keeps adding #EXT-X-ENDLIST to the end of the .m3u8 file, which, as I understand it, is not correct for a live HLS stream - here's what that generates:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:4
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:4.000000,
    stream0.ts
    #EXTINF:4.000000,
    stream1.ts
    #EXTINF:2.000000,
    stream2.ts
    #EXT-X-ENDLIST

    I can't figure out how to suppress #EXT-X-ENDLIST here - this is problem #1.
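    (One possible route for problem #1, if the ffmpeg build supports it, is the hls muxer's -hls_flags omit_endlist option, which tells the muxer not to write the tag. Failing that, since the pipeline will be driven from Python anyway, the tag can be stripped as a post-processing step - a sketch, with a made-up function name:)

    ```python
    def strip_endlist(playlist_text):
        # Drop the #EXT-X-ENDLIST tag so the playlist reads as a live stream
        lines = [ln for ln in playlist_text.splitlines()
                 if ln.strip() != "#EXT-X-ENDLIST"]
        return "\n".join(lines) + "\n"
    ```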


    Then, to generate subsequent segments (e.g. when new frames become available), I'm trying this:

    ffmpeg -y -framerate 2 -start_number 20 -i /frames/batch2/frame_%d.jpg \
       -c:v libx264 -crf 21 -preset veryfast -g 2 \
       -f hls -hls_time 4 -hls_list_size 4 -segment_wrap 4 -segment_list_flags +live video/stream.m3u8

    Unfortunately, this does not work the way I want it to. It simply overwrites stream.m3u8, does not advance #EXT-X-MEDIA-SEQUENCE, does not index the new .ts files correctly, and also includes the undesirable #EXT-X-ENDLIST - this is the output of that command:

    #EXTM3U
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:4
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:4.000000,
    stream0.ts
    #EXTINF:4.000000,
    stream1.ts
    #EXTINF:3.000000,
    stream2.ts
    #EXT-X-ENDLIST

    Fundamentally, I can't figure out how to "append" to an existing .m3u8 in a way that makes sense for HLS live streaming. That's essentially problem #2.
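    (For reference on problem #2, the hls muxer does have an -hls_flags append_list option intended for resuming an existing playlist, though whether it behaves well on a build as old as 3.4.6 is uncertain. Since the plan is to automate this from Python anyway, the playlist can also be maintained by hand - a sketch with made-up names, keeping a sliding window of segments and advancing #EXT-X-MEDIA-SEQUENCE by the number of segments dropped:)

    ```python
    def append_segments(playlist_text, new_segments, window=4):
        # new_segments: list of (duration_seconds, uri) for the fresh .ts files
        header, entries, seq = [], [], 0
        lines = playlist_text.splitlines()
        i = 0
        while i < len(lines):
            ln = lines[i]
            if ln.startswith("#EXTINF:"):
                entries.append((ln, lines[i + 1]))  # keep duration + uri as a pair
                i += 2
                continue
            if ln.startswith("#EXT-X-MEDIA-SEQUENCE:"):
                seq = int(ln.split(":", 1)[1])      # reinserted (updated) below
            elif ln == "#EXT-X-ENDLIST":
                pass                                # a live playlist must not end
            else:
                header.append(ln)
            i += 1
        for dur, uri in new_segments:
            entries.append(("#EXTINF:%.6f," % dur, uri))
        dropped = max(0, len(entries) - window)     # slide the live window
        entries = entries[dropped:]
        out = []
        for ln in header:
            out.append(ln)
            if ln.startswith("#EXT-X-TARGETDURATION"):
                out.append("#EXT-X-MEDIA-SEQUENCE:%d" % (seq + dropped))
        for extinf, uri in entries:
            out.extend([extinf, uri])
        return "\n".join(out) + "\n"
    ```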


    For hosting the stream, I'm using a simple Flask app - which appears to be working the way I intend - here's what I'm doing for reference:

    @app.route('/video/<file_name>')
    def stream(file_name):
        video_dir = './video'
        return send_from_directory(directory=video_dir, filename=file_name)

    Client-side:

    I'm trying HLS.js in Chrome - it basically boils down to this:

    <video id="video1"></video>

    ...

    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
    <script>
       var video = document.getElementById('video1');
       if (Hls.isSupported()) {
         var hls = new Hls();
         hls.loadSource('/video/stream.m3u8');
         hls.attachMedia(video);
         hls.on(Hls.Events.MANIFEST_PARSED, function() {
           video.play();
         });
       }
       else if (video.canPlayType('application/vnd.apple.mpegurl')) {
         video.src = '/video/stream.m3u8';
         video.addEventListener('loadedmetadata', function() {
           video.play();
         });
       }
    </script>

    I'd like to think that what I'm trying to do doesn't require a more complex approach than the above, but since what I'm trying so far definitely isn't working, I'm starting to think I need to come at this from a different angle. Any ideas on what I'm missing?

    Edit:

    I've also attempted the same (again in Chrome) with video.js, and am seeing similar behavior - in particular, when I manually update the backing stream.m3u8 (with no #EXT-X-ENDLIST tag), videojs never picks up the new changes to the live stream, and just buffers/hangs indefinitely.


    <video id="video1" class="video-js vjs-default-skin" muted="muted" controls="controls">
        <source type="application/x-mpegURL" src="/video/stream.m3u8">
    </video>

    ...

    <script>
        var player = videojs('video1');
        player.play();
    </script>

    For example, if I start with this initial version of stream.m3u8:

    #EXTM3U
    #EXT-X-PLAYLIST-TYPE:EVENT
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:8
    #EXT-X-MEDIA-SEQUENCE:0
    #EXTINF:4.000000,
    stream0.ts
    #EXTINF:4.000000,
    stream1.ts
    #EXTINF:2.000000,
    stream2.ts

    and then manually update it server-side to this:

    #EXTM3U
    #EXT-X-PLAYLIST-TYPE:EVENT
    #EXT-X-VERSION:3
    #EXT-X-TARGETDURATION:8
    #EXT-X-MEDIA-SEQUENCE:3
    #EXTINF:4.000000,
    stream3.ts
    #EXTINF:4.000000,
    stream4.ts
    #EXTINF:3.000000,
    stream5.ts

    the video.js control just buffers indefinitely after only playing the first 3 segments (stream*.ts 0-2), which isn't what I'd expect to happen (I'd expect it to continue playing stream*.ts 3-5 once stream.m3u8 is updated and video.js makes a request for the latest version of the playlist).
