
Other articles (66)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Encoding and conversion to web-playable formats

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents to make them playable on the Internet and automatically usable without any intervention from the content creator.
    Videos are automatically encoded in the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used by the Flash fallback player required by older browsers.
    Audio documents are likewise re-encoded in the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    Among other things, this allows: implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (7709)

  • FFmpeg Overwriting Playlist

    30 September 2024, by Program-Me-Rev

    I'm working on an implementation that aims to generate a DASH playlist from raw Camera2 data in Android Java using FFmpeg.

    However, the current implementation only produces three .m4s files regardless of how long the recording lasts. My goal is to create a playlist with 1-second .m4s segments, but the output only includes the following files, and the video length doesn't exceed 2 seconds:

    - playlist.mpd
    - init.m4s
    - 1.m4s
    - 2.m4s

    While the temporary files are created as expected, the .m4s files stop after these two segments. Additionally, only the last 2 seconds of the recording are retained, no matter how long the recording runs.

    The FFmpeg output indicates that FFmpeg is repeatedly overwriting the previously generated playlist, which may explain why the recording doesn't extend beyond 2 seconds.

    FFmpeg version: 6.0

    


    package rev.ca.rev_media_dash_camera2;

    import android.app.Activity;
    import android.content.Context;
    import android.media.Image;
    import android.util.Log;
    import android.util.Size;

    import androidx.annotation.NonNull;
    import androidx.camera.core.CameraSelector;
    import androidx.camera.core.ImageAnalysis;
    import androidx.camera.core.ImageProxy;
    import androidx.camera.core.Preview;
    import androidx.camera.lifecycle.ProcessCameraProvider;
    import androidx.camera.view.PreviewView;
    import androidx.core.content.ContextCompat;
    import androidx.lifecycle.LifecycleOwner;

    import com.arthenica.ffmpegkit.FFmpegKit;
    import com.arthenica.ffmpegkit.ReturnCode;
    import com.google.common.util.concurrent.ListenableFuture;

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RevCameraCapture {
        private static final String REV_TAG = "RevCameraCapture";

        private final Context revContext;
        private final ExecutorService revExecutorService;
        private final String revOutDirPath = "/storage/emulated/0/Documents/Owki/rev_web_rtc_live_chat_temp_files/_abc_rev_uploads_temp";
        private boolean isRevRecording;
        private File revTempFile;
        private int revFrameCount = 0; // Counter for frames captured

        public RevCameraCapture(Context revContext) {
            this.revContext = revContext;

            revInitDir(revOutDirPath);
            revCheckOrCreatePlaylist();

            revExecutorService = Executors.newSingleThreadExecutor();
        }

        private void revInitDir(String revDirPath) {
            // Create a File object for the directory
            File revNestedDir = new File(revDirPath);

            // Check if the directory exists; if not, create it
            if (!revNestedDir.exists()) {
                boolean revResult = revNestedDir.mkdirs();  // mkdirs() creates the whole path
                if (revResult) {
                    Log.e(REV_TAG, ">>> Directories created successfully.");
                } else {
                    Log.e(REV_TAG, ">>> Failed to create directories.");
                }
            } else {
                Log.e(REV_TAG, ">>> Directories already exist.");
            }
        }

        private void revCheckOrCreatePlaylist() {
            File revPlaylistFile = new File(revOutDirPath, "rev_playlist.mpd");
            if (!revPlaylistFile.exists()) {
                // Create an empty playlist if it doesn't exist
                try {
                    FileOutputStream revFos = new FileOutputStream(revPlaylistFile);
                    revFos.write("".getBytes());
                    revFos.close();
                } catch (IOException e) {
                    Log.e(REV_TAG, ">>> Error creating initial rev_playlist: ", e);
                }
            }
        }

        private void revStartFFmpegProcess() {
            // Ensure revTempFile exists before processing
            if (revTempFile == null || !revTempFile.exists()) {
                Log.e(REV_TAG, ">>> Temporary file does not exist for FFmpeg processing.");
                return;
            }

            // FFmpeg command to convert the temp file to DASH format and append to the existing rev_playlist
            String ffmpegCommand = "-f rawvideo -pixel_format yuv420p -video_size 704x704 "
                    + "-i " + revTempFile.getAbsolutePath()
                    + " -c:v mpeg4 -b:v 1M "
                    + "-f dash -seg_duration 1 -use_template 1 -use_timeline 1 "
                    + "-init_seg_name 'init.m4s' -media_seg_name '$Number$.m4s' "
                    + revOutDirPath + "/rev_playlist.mpd -loglevel debug";

            FFmpegKit.executeAsync(ffmpegCommand, session -> {
                ReturnCode returnCode = session.getReturnCode();
                if (ReturnCode.isSuccess(returnCode)) {
                    // Optionally handle success, e.g., log or notify that the process completed successfully
                } else {
                    Log.e(REV_TAG, ">>> FFmpeg process failed with return code: " + returnCode);
                }
            });
        }

        public void revStartCamera() {
            isRevRecording = true;

            ListenableFuture<ProcessCameraProvider> revCameraProviderFuture = ProcessCameraProvider.getInstance(revContext);

            revCameraProviderFuture.addListener(() -> {
                try {
                    ProcessCameraProvider revCameraProvider = revCameraProviderFuture.get();
                    revBindPreview(revCameraProvider);
                    revBindImageAnalysis(revCameraProvider);
                } catch (ExecutionException | InterruptedException e) {
                    Log.e(REV_TAG, ">>> Failed to start camera: ", e);
                }
            }, ContextCompat.getMainExecutor(revContext));
        }

        private void revBindPreview(ProcessCameraProvider revCameraProvider) {
            CameraSelector revCameraSelector = new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();

            PreviewView previewView = ((Activity) revContext).findViewById(R.id.previewView);
            Preview preview = new Preview.Builder().build();
            preview.setSurfaceProvider(previewView.getSurfaceProvider());

            revCameraProvider.unbindAll();
            revCameraProvider.bindToLifecycle((LifecycleOwner) revContext, revCameraSelector, preview);
        }

        private void revBindImageAnalysis(@NonNull ProcessCameraProvider revCameraProvider) {
            ImageAnalysis revImageAnalysis = new ImageAnalysis.Builder()
                    .setTargetResolution(new Size(640, 480)) // Lower the resolution to reduce memory consumption
                    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                    .build();

            revImageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(revContext), this::revAnalyze);
            CameraSelector revCameraSelector = new CameraSelector.Builder().requireLensFacing(CameraSelector.LENS_FACING_BACK).build();

            revCameraProvider.bindToLifecycle((LifecycleOwner) revContext, revCameraSelector, revImageAnalysis);
        }

        @androidx.annotation.OptIn(markerClass = androidx.camera.core.ExperimentalGetImage.class)
        private void revAnalyze(@NonNull ImageProxy revImageProxy) {
            try {
                revProcessImageFrame(revImageProxy);
            } catch (Exception e) {
                Log.e(REV_TAG, ">>> Error processing revImage frame", e);
            } finally {
                revImageProxy.close(); // Always close the revImageProxy
            }
        }

        @androidx.annotation.OptIn(markerClass = androidx.camera.core.ExperimentalGetImage.class)
        private void revProcessImageFrame(@NonNull ImageProxy revImageProxy) {
            Image revImage = revImageProxy.getImage();
            if (revImage != null) {
                byte[] revImageBytes = revConvertYUV420888ToByteArray(revImage);
                revWriteFrameToTempFile(revImageBytes); // Write frame to a temporary file
            }
            revImageProxy.close(); // Close the ImageProxy to release the revImage buffer
        }

        private byte[] revConvertYUV420888ToByteArray(Image revImage) {
            Image.Plane[] planes = revImage.getPlanes();
            ByteBuffer revBufferY = planes[0].getBuffer();
            ByteBuffer revBufferU = planes[1].getBuffer();
            ByteBuffer revBufferV = planes[2].getBuffer();

            int revWidth = revImage.getWidth();
            int revHeight = revImage.getHeight();

            int revSizeY = revWidth * revHeight;
            int revSizeUV = (revWidth / 2) * (revHeight / 2); // U and V sizes are half the Y size

            // Total size = Y + U + V
            byte[] revData = new byte[revSizeY + 2 * revSizeUV];

            // Copy Y plane
            revBufferY.get(revData, 0, revSizeY);

            // Copy U and V planes, accounting for row stride and pixel stride
            int revOffset = revSizeY;
            int revPixelStrideU = planes[1].getPixelStride();
            int rowStrideU = planes[1].getRowStride();
            int revPixelStrideV = planes[2].getPixelStride();
            int rowStrideV = planes[2].getRowStride();

            // Copy U plane
            for (int row = 0; row < revHeight / 2; row++) {
                for (int col = 0; col < revWidth / 2; col++) {
                    revData[revOffset++] = revBufferU.get(row * rowStrideU + col * revPixelStrideU);
                }
            }

            // Copy V plane
            for (int row = 0; row < revHeight / 2; row++) {
                for (int col = 0; col < revWidth / 2; col++) {
                    revData[revOffset++] = revBufferV.get(row * rowStrideV + col * revPixelStrideV);
                }
            }

            return revData;
        }

        private void revWriteFrameToTempFile(byte[] revImageBytes) {
            revExecutorService.execute(() -> {
                try {
                    // Create a new temp file for each segment if needed
                    if (revTempFile == null || revFrameCount == 0) {
                        revTempFile = File.createTempFile("vid_segment_", ".yuv", new File(revOutDirPath));
                    }

                    try (FileOutputStream revFos = new FileOutputStream(revTempFile, true)) {
                        revFos.write(revImageBytes);
                    }

                    revFrameCount++;

                    // Process after 60 frames (2 seconds at 30 fps)
                    if (revFrameCount >= 60 && isRevRecording) {
                        revStartFFmpegProcess();  // Process the segment with FFmpeg
                        revFrameCount = 0;  // Reset the frame count
                        revTempFile = null;  // Reset temp file for the next segment
                    }

                } catch (IOException e) {
                    Log.e(REV_TAG, ">>> Error writing frame to temp file: ", e);
                }
            });
        }

        public void revStopCamera() {
            isRevRecording = false;
            if (revTempFile != null && revTempFile.exists()) {
                revTempFile.delete(); // Clean up the temporary file
                revTempFile = null; // Reset the temp file reference
            }
        }
    }


    package rev.ca.rev_media_dash_camera2;

    import android.os.Bundle;

    import androidx.appcompat.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {
        private RevCameraCapture revCameraCapture;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            revCameraCapture = new RevCameraCapture(this);
        }

        @Override
        protected void onStart() {
            super.onStart();
            try {
                revCameraCapture.revStartCamera();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @Override
        protected void onStop() {
            super.onStop();
            revCameraCapture.revStopCamera(); // Ensure camera is stopped when not in use
        }
    }
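    One way to see why only 1.m4s and 2.m4s ever appear: each call to revStartFFmpegProcess() launches a fresh FFmpeg process, and with -media_seg_name '$Number$.m4s' a fresh process restarts its segment numbering at 1, so every 60-frame batch emits the same file names and overwrites the previous batch's output. A minimal sketch of that effect (illustrative only; no FFmpeg involved, frame counts taken from the code above):

    ```python
    def segment_names(frames_per_run, seg_duration_frames=30):
        # Each independent FFmpeg invocation numbers its segments from 1,
        # mirroring -media_seg_name '$Number$.m4s' on a fresh process.
        count = frames_per_run // seg_duration_frames
        return [f"{n}.m4s" for n in range(1, count + 1)]

    on_disk = set()
    for _ in range(5):                     # five successive 60-frame batches
        on_disk.update(segment_names(60))  # every run emits the same names

    print(sorted(on_disk))                 # only two distinct segment files survive
    ```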

  • RTSP to HLS via FFMPEG, latency issues

    28 June 2024, by Pabl0

    The following are all the steps that I took to render an RTSP stream in my web app:

    How to display an RTSP stream in the browser using HLS

    Situation and Problem
    You have an RTSP stream that you want to display in a browser using HLS (HTTP Live Streaming). However, when you try to play the RTSP stream in the browser using hls.js, you encounter the error "Unsupported HEVC in M2TS found." This error indicates that the HLS stream uses the HEVC (H.265) codec, which is not widely supported by many browsers and HLS players, including hls.js.

    The most reliable solution is to transcode the stream from H.265 to H.264 using FFmpeg, which is more broadly supported. Here's how to transcode the stream:

    Step 1: Transcode the stream using FFmpeg

    Run the following FFmpeg command to transcode the RTSP stream from H.265 to H.264 and generate the HLS segments:

    ffmpeg -i rtsp://192.168.144.25:8554/main.264 -c:v libx264 -c:a aac -strict -2 -hls_time 10 -hls_list_size 0 -f hls C:\path\to\output\index.m3u8

    -c:v libx264 sets the video codec to H.264.
    -c:a aac sets the audio codec to AAC.
    -hls_time 10 sets the duration of each segment to 10 seconds.
    -hls_list_size 0 tells FFmpeg to include all segments in the playlist.
    -f hls specifies the output format as HLS.
    C:\path\to\output\ is the directory where the HLS files will be saved. Ensure this is the directory where you want to save the HLS files.
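    For anyone driving this transcode from Python rather than a shell, the same command can be assembled as a subprocess argument list (a sketch; the RTSP URL and output path are the placeholders from the command above):

    ```python
    def hls_transcode_cmd(rtsp_url, out_playlist):
        # H.265 -> H.264 video, AAC audio, 10-second HLS segments, keep all segments
        return [
            "ffmpeg", "-i", rtsp_url,
            "-c:v", "libx264",
            "-c:a", "aac", "-strict", "-2",
            "-hls_time", "10",
            "-hls_list_size", "0",
            "-f", "hls", out_playlist,
        ]

    cmd = hls_transcode_cmd("rtsp://192.168.144.25:8554/main.264",
                            r"C:\path\to\output\index.m3u8")
    # subprocess.run(cmd) would launch the transcode
    ```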

    Step 2: Verify the HLS files

    After running the FFmpeg command, verify that the following files are generated in the output directory:

    index.m3u8 (the HLS playlist file)
    Multiple .ts segment files (e.g., index0.ts, index1.ts, etc.)
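    That check can be scripted, for example as the following sketch (it assumes the index.m3u8/index*.ts naming produced by the command above):

    ```python
    import os

    def hls_outputs_present(out_dir):
        # True when the playlist and at least one .ts segment exist
        files = os.listdir(out_dir)
        has_playlist = "index.m3u8" in files
        has_segments = any(f.startswith("index") and f.endswith(".ts") for f in files)
        return has_playlist and has_segments
    ```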

    Step 3: Serve the HLS files with an HTTP server

    Navigate to the directory containing the HLS files and start the HTTP server:

    cd C:\path\to\output
    python -m http.server 8000

    Step 4: Update and test the HTML file

    Ensure that the hls_test.html file is in the same directory as the HLS files and update it as needed:

    hls_test.html (the page skeleton and the playButton/video ids were lost in extraction and are restored here to match what the script expects):

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>HLS Stream Test</title>
    </head>
    <body>
        <h1>HLS Stream Test</h1>
        <button id="playButton">Play Stream</button>
        <video id="video" controls style="width: 100%; height: auto;"></video>
        <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
        <script>
            document
                .getElementById('playButton')
                .addEventListener('click', () => {
                    const video = document.getElementById('video');
                    if (Hls.isSupported()) {
                        const hls = new Hls();
                        hls.loadSource('http://localhost:8000/index.m3u8');
                        hls.attachMedia(video);
                        hls.on(Hls.Events.MANIFEST_PARSED, function () {
                            video.play().catch((error) => {
                                console.error('Error attempting to play:', error);
                            });
                        });
                        hls.on(Hls.Events.ERROR, function (event, data) {
                            console.error('HLS Error:', data);
                        });
                    } else if (
                        video.canPlayType('application/vnd.apple.mpegurl')
                    ) {
                        video.src = 'http://localhost:8000/index.m3u8';
                        video.addEventListener('canplay', function () {
                            video.play().catch((error) => {
                                console.error('Error attempting to play:', error);
                            });
                        });
                    } else {
                        console.error('HLS not supported in this browser.');
                    }
                });
        </script>
    </body>
    </html>

    Step 5: Open the HTML file in your browser

    Open your browser and navigate to:

    http://localhost:8000/hls_test.html

    Click the "Play Stream" button to start playing the HLS stream. If everything is set up correctly, you should see the video playing in the browser.

    Conclusion

    By transcoding the RTSP stream from H.265 to H.264 and serving it as an HLS stream, you can display the video in a browser using hls.js. This approach ensures broader compatibility with browsers and HLS players, allowing you to stream video content seamlessly.

    PART 2: Add this method to the React app

    We are assuming that the FFmpeg command is running in the background and generating the HLS stream. Now we will create a React component that plays the HLS stream in the browser using the video.js library.

    If not, please refer to the previous steps to generate the HLS stream using FFmpeg (steps 1-3 of the previous section).

    Step 1: Create the Camera component

    import { useRef } from 'react';
    import videojs from 'video.js';
    import 'video.js/dist/video-js.css';

    const Camera = ({ streamUrl }) => {
        const videoRef = useRef(null);
        const playerRef = useRef(null);

        const handlePlayClick = () => {
            const videoElement = videoRef.current;
            if (videoElement) {
                playerRef.current = videojs(videoElement, {
                    controls: true,
                    autoplay: false,
                    preload: 'auto',
                    sources: [
                        {
                            src: streamUrl,
                            type: 'application/x-mpegURL',
                        },
                    ],
                });

                playerRef.current.on('error', () => {
                    const error = playerRef.current.error();
                    console.error('VideoJS Error:', error);
                });

                playerRef.current.play().catch((error) => {
                    console.error('Error attempting to play:', error);
                });
            }
        };

        // The JSX below was stripped in extraction; it is restored to match
        // the handlePlayClick handler and videoRef used above.
        return (
            <div>
                <button onClick={handlePlayClick}>Play Stream</button>
                <video ref={videoRef} className="video-js" />
            </div>
        );
    };

    export default Camera;

    Note: this component uses the video.js library to play the HLS stream. Make sure to install video.js using npm or yarn:

    npm install video.js

    Step 2: Use the Camera component in your app

    Now you can use the Camera component in your React app to display the HLS stream. Here's an example of how to use the Camera component:

    <Camera streamUrl="http://localhost:8000/index.m3u8" />

    Note: we are pointing to the HLS stream URL generated by FFmpeg in the previous steps.

    Step 3: Create the CORS proxy server and place it where the HLS files are stored.

    from http.server import HTTPServer, SimpleHTTPRequestHandler
    import socketserver
    import os

    class CORSRequestHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Allow cross-origin requests from the React app
            self.send_header('Access-Control-Allow-Origin', '*')
            if self.path.endswith('.m3u8'):
                self.send_header('Content-Type', 'application/vnd.apple.mpegurl')
            elif self.path.endswith('.ts'):
                self.send_header('Content-Type', 'video/MP2T')
            super().end_headers()

    if __name__ == '__main__':
        port = 8000
        handler = CORSRequestHandler
        web_dir = r'C:\Video_cam_usv'
        os.chdir(web_dir)
        httpd = socketserver.TCPServer(('', port), handler)
        print(f"Serving HTTP on port {port}")
        httpd.serve_forever()

    Note: change web_dir to the directory where the HLS files are stored.

    Also, note that the server must send the correct MIME types for .m3u8 and .ts files. For example:

    .m3u8 should be application/vnd.apple.mpegurl or application/x-mpegURL.
    .ts should be video/MP2T.
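    The extension-to-MIME mapping the handler relies on can be kept in one place; a small sketch mirroring the server code above:

    ```python
    import os

    # MIME types HLS clients expect; mirrors the CORSRequestHandler logic
    HLS_MIME = {
        ".m3u8": "application/vnd.apple.mpegurl",  # playlist
        ".ts": "video/MP2T",                       # media segments
    }

    def mime_for(path):
        # Returns None for extensions the map does not cover
        return HLS_MIME.get(os.path.splitext(path)[1].lower())
    ```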

    Step 4: Start the CORS proxy server

    Open a terminal, navigate to the directory where the CORS proxy server script is located (the same directory where the HLS files are saved), and run the following command:

    python cors_proxy_server.py

    This will start the CORS proxy server on port 8000 and serve the HLS files with the correct MIME types.

    Step 5: Start the React app
    Start your React app using the following command:

    npm run dev

    I have tried everything above (it's my own doc to keep track of the steps I've taken so far) and I get the stream to render in my web app, but the latency is very high, at least 5-10 seconds. How can I make it real time, or close to it?
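    For context on that latency figure: segment-based HLS players typically buffer a few segments before playback, so with -hls_time 10 a multi-second delay is expected. The following are commonly suggested additions to the FFmpeg command for shorter segments and a faster encoder pipeline; they are assumptions to experiment with for this setup, not a verified recipe:

    ```python
    # Commonly suggested low-latency additions to an ffmpeg HLS command.
    # These are experimental knobs, not a guaranteed fix for this stream.
    LOW_LATENCY_ARGS = [
        "-preset", "ultrafast",           # faster x264 encoding
        "-tune", "zerolatency",           # disable encoder lookahead buffering
        "-g", "25",                       # short GOP so segments can cut on keyframes
        "-hls_time", "1",                 # 1-second segments
        "-hls_list_size", "3",            # keep only a short live window
        "-hls_flags", "delete_segments",  # drop old segments from disk
    ]
    ```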

  • FFmpeg fails to draw text

    6 April 2024, by Edoardo Balducci

    I've rarely used ffmpeg before, so sorry if the question is too dumb.
    I have a problem adding a text layer to a video frame using ffmpeg.

    This is my current code:

    import subprocess
    from PyQt5.QtGui import QPixmap, QImage
    from PyQt5.QtWidgets import QLabel

    class VideoThumbnailLabel(QLabel):
        def __init__(self, file_path, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.video = video
            video_duration = self.get_video_duration(file_path)
            thumbnail_path = self.get_thumbnail(file_path, video_duration)
            if thumbnail_path:
                self.setPixmap(QPixmap(thumbnail_path).scaled(160, 90, Qt.KeepAspectRatio))
            self.setToolTip(f"{video.title}\n{video.description}")

        def get_video_duration(self, video_path):
            """Returns the duration of the video in seconds."""
            command = [
                'ffprobe', '-v', 'error', '-show_entries',
                'format=duration', '-of',
                'default=noprint_wrappers=1:nokey=1', video_path
            ]
            try:
                result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
                if result.returncode != 0:
                    print(f"ffprobe error: {result.stderr}")
                    return 0
                duration = float(result.stdout)
                return int(duration)  # Returning duration as an integer for simplicity
            except Exception as e:
                print(f"Error getting video duration: {e}")
                return 0

        def get_thumbnail(self, video_path, duration):
            """Generates a thumbnail with the video duration overlaid."""
            output_path = "thumbnail.jpg"  # Temporary thumbnail file
            duration_str = f"{duration // 3600:02d}:{(duration % 3600) // 60:02d}:{duration % 60:02d}"
            command = [
                'ffmpeg', '-i', video_path,
                '-ss', '00:00:01',  # Time to take the screenshot
                '-frames:v', '1',  # Number of frames to capture
                '-vf', f"drawtext=text='Duration: {duration_str}':x=10:y=10:fontsize=24:fontcolor=white",
                '-q:v', '2',  # Output quality
                '-y',  # Overwrite output files without asking
                output_path
            ]
            try:
                result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
                if result.returncode != 0:
                    print(f"ffmpeg error: {result.stderr}")
                    return None
                return output_path
            except Exception as e:
                print(f"Error generating thumbnail with duration: {e}")
                return None

    and it is used like this:

    for i, video in enumerate(self.videos):
        video_widget = VideoThumbnailLabel(video.file)
        video_widget.mousePressEvent = lambda event, v=video: self.onThumbnailClick(
            v
        )
        self.layout.addWidget(video_widget, i // 3, i % 3)

    I'm facing a problem where I am not able to get the thumbnail when I try to add the duration (I've tested it without the drawtext filter and it worked fine).

    I get this error (printed from result.stderr) that I'm not able to comprehend:

    ffmpeg error: ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
      built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)
      configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/6.1.1_4 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenvino --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/edoardo/Projects/work/test/BigBuckBunny.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: isomavc1mp42
        creation_time   : 2010-01-10T08:29:06.000000Z
      Duration: 00:09:56.47, start: 0.000000, bitrate: 2119 kb/s
      Stream #0:0[0x1](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)
        Metadata:
          creation_time   : 2010-01-10T08:29:06.000000Z
          handler_name    : (C) 2007 Google Inc. v08.13.2007.
          vendor_id       : [0][0][0][0]
      Stream #0:1[0x2](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1280x720 [SAR 1:1 DAR 16:9], 1991 kb/s, 24 fps, 24 tbr, 24k tbn (default)
        Metadata:
          creation_time   : 2010-01-10T08:29:06.000000Z
          handler_name    : (C) 2007 Google Inc. v08.13.2007.
          vendor_id       : [0][0][0][0]
    [Parsed_drawtext_0 @ 0x60000331cd10] Both text and text file provided. Please provide only one
    [AVFilterGraph @ 0x600002018000] Error initializing filters
    [vost#0:0/mjpeg @ 0x13ce0c7e0] Error initializing a simple filtergraph
    Error opening output file thumbnail.jpg.
    Error opening output files: Invalid argument
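    For reference, the decisive line in that log is the drawtext parse failure. FFmpeg's filter syntax treats ':' as an option separator, and unescaped colons inside the drawtext text value (here, the HH:MM:SS duration string) are a frequently reported cause of drawtext parse errors like this one. A sketch of escaping the text before building the filter; the helper name is hypothetical, not from the original post:

    ```python
    def drawtext_filter(label, x=10, y=10, fontsize=24, fontcolor="white"):
        # '\' and ':' are special in ffmpeg filter arguments, so escape them
        # inside the text value before composing the -vf string
        escaped = label.replace("\\", "\\\\").replace(":", "\\:")
        return (f"drawtext=text='{escaped}'"
                f":x={x}:y={y}:fontsize={fontsize}:fontcolor={fontcolor}")

    flt = drawtext_filter("Duration: 00:09:56")
    ```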

    I've installed both ffmpeg and ffprobe on my machine:

    ┌(edoardomacbook-air)-[~/Projects/work/tests-scripts]                                                                                                                                   &#xA;└─ $ ffmpeg -version &amp;&amp; ffprobe -version                                                                                                                                                              2 ⚙ &#xA;ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers&#xA;built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)&#xA;configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/6.1.1_4 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags=&#x27;-Wl,-ld_classic&#x27; --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenvino --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon&#xA;libavutil      58. 29.100 / 58. 29.100&#xA;libavcodec     60. 31.102 / 60. 31.102&#xA;libavformat    60. 16.100 / 60. 16.100&#xA;libavdevice    60.  3.100 / 60.  3.100&#xA;libavfilter     9. 12.100 /  9. 12.100&#xA;libswscale      7.  5.100 /  7.  5.100&#xA;libswresample   4. 12.100 /  4. 12.100&#xA;libpostproc    57.  3.100 / 57.  
3.100&#xA;ffprobe version 6.1.1 Copyright (c) 2007-2023 the FFmpeg developers&#xA;built with Apple clang version 15.0.0 (clang-1500.1.0.2.5)&#xA;configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/6.1.1_4 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags=&#x27;-Wl,-ld_classic&#x27; --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenvino --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon&#xA;libavutil      58. 29.100 / 58. 29.100&#xA;libavcodec     60. 31.102 / 60. 31.102&#xA;libavformat    60. 16.100 / 60. 16.100&#xA;libavdevice    60.  3.100 / 60.  3.100&#xA;libavfilter     9. 12.100 /  9. 12.100&#xA;libswscale      7.  5.100 /  7.  5.100&#xA;libswresample   4. 12.100 /  4. 12.100&#xA;libpostproc    57.  3.100 / 57.  3.100&#xA;

    &#xA;

    Does anyone see the problem?

    P.S.: I know that I haven't provided a minimal reproducible example, but since I don't know where the problem lies, I didn't want to exclude anything.