
Other articles (65)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own using the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used instead.
    The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (8582)

  • FFMPEG and Python: Stream a frame into video

    17 August 2023, by Vasilis Lemonidis

    Old approach

    I have created a small class for the job. After streaming the third frame I get an error from FFmpeg:

    pipe:0: Invalid data found when processing input

    and then I get a broken pipe.

    I have a feeling my ffmpeg input arguments are incorrect; I have little experience with the tool. Here is the code:

import os
import shutil
import subprocess
import sys

import cv2
import numpy as np


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        assert video_path.endswith(".flv")
        self._ps = None

        self.video_path = video_path
        self.framerate = framerate
        self._video = None
        self.curr_frame = None
        if os.path.isfile(self.video_path):
            # Re-feed the frames of the existing file into the ffmpeg pipe.
            shutil.copyfile(self.video_path, self.video_path + ".old")
            cap = cv2.VideoCapture(self.video_path + ".old")
            while cap.isOpened():
                ret, self.curr_frame = cap.read()
                if not ret:
                    break
                if len(self.curr_frame.shape) == 2:
                    self.curr_frame = cv2.cvtColor(self.curr_frame, cv2.COLOR_GRAY2RGB)
                self.ps.stdin.write(self.curr_frame.tobytes())

    @property
    def ps(self) -> subprocess.Popen:
        # Lazily start ffmpeg once the first frame (and thus the frame size) is known.
        if self._ps is None:
            framesize = self.curr_frame.shape[0] * self.curr_frame.shape[1] * 3 * 8
            self._ps = subprocess.Popen(
                f"ffmpeg  -i pipe:0 -vcodec mpeg4 -s qcif -frame_size {framesize} -y {self.video_path}",
                shell=True,
                stdin=subprocess.PIPE,
                stdout=sys.stdout,
            )
        return self._ps

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
        self.curr_frame = frame
        self.ps.stdin.write(frame.tobytes())


    and here is a script I use to test it :

    import os
    import numpy as np
    import cv2

    size = (300, 300, 3)
    img_array = [np.random.randint(255, size=size, dtype=np.uint8) for c in range(50)]

    tmp_path = "tmp.flv"
    tmp_path = str(tmp_path)
    out = VideoUpdater(tmp_path, 1)

    for i in range(len(img_array)):
        out.update(img_array[i])
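
    For reference, when raw frames are written to ffmpeg over a pipe, the input format normally has to be declared explicitly, since a bare pipe carries no container metadata. A minimal sketch of such an invocation, assuming 300x300 BGR frames at 1 fps as in the test script above (the output settings are illustrative only):

    ffmpeg -f rawvideo -pixel_format bgr24 -video_size 300x300 -framerate 1 -i pipe:0 -c:v libx264 -pix_fmt yuv420p -y tmp.flv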


    Update closer to what I want

    Having studied further how ffmpeg internals work, I went for an approach without pipes, where a one-frame video is created and appended to the .ts file on every update:

import logging
import os
import shutil
import subprocess
from tempfile import NamedTemporaryFile

import cv2
import numpy as np

LOGGER = logging.getLogger(__name__)


class VideoUpdater:
    def __init__(self, video_path: str, framerate: int):
        if not video_path.endswith(".mp4"):
            LOGGER.warning(
                f"File type {os.path.splitext(video_path)[1]} not supported for streaming, switching to ts"
            )
            video_path = os.path.splitext(video_path)[0] + ".mp4"

        self._ps = None
        self.env = {
        }
        self.ffmpeg = "ffmpeg "
        self.video_path = video_path
        self.ts_path = video_path.replace(".mp4", ".ts")
        self.tfile = None
        self.framerate = framerate
        self._video = None
        self.last_frame = None
        self.curr_frame = None

    def update(self, frame: np.ndarray):
        if len(frame.shape) == 2:
            frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        else:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        self.writeFrame(frame)

    def writeFrame(self, frame: np.ndarray):
        tImLFrame = NamedTemporaryFile(suffix=".png")
        tVidLFrame = NamedTemporaryFile(suffix=".ts")

        cv2.imwrite(tImLFrame.name, frame)
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-loop 1 -r {self.framerate} -i {tImLFrame.name} -t {self.framerate} -vcodec libx264 -pix_fmt yuv420p -y {tVidLFrame.name}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        ps.communicate()
        if os.path.isfile(self.ts_path):
            # this does not work to watch, as timestamps are not updated
            ps = subprocess.Popen(
                self.ffmpeg
                + rf'-i "concat:{self.ts_path}|{tVidLFrame.name}" -c copy -y {self.ts_path.replace(".ts", ".bak.ts")}',
                env=self.env,
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
            ps.communicate()
            shutil.move(self.ts_path.replace(".ts", ".bak.ts"), self.ts_path)

        else:
            shutil.copyfile(tVidLFrame.name, self.ts_path)
        # fixing timestamps, we dont have to wait for this operation
        ps = subprocess.Popen(
            self.ffmpeg
            + rf"-i {self.ts_path} -fflags +genpts -c copy -y {self.video_path}",
            env=self.env,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        tImLFrame.close()
        tVidLFrame.close()



    As you may notice, a timestamp correction needs to be performed. Reading back the final mp4 file, however, I saw that it consistently has 3 fewer frames than the ts file: the first 3 frames are missing. Does anyone have an idea why this is happening?

  • How to stream H.264 bitstream to browser

    21 January 2019, by BobtheMagicMoose

    This is a followup to https://raspberrypi.stackexchange.com/questions/93254/stream-usb-webcam-with-audio?noredirect=1#comment150507_93254

    I, like many other brave tinkerers before me, thought it would be a simple task to take an old USB camera (C920), pair it with a Raspberry Pi, and make a network streaming device (e.g., a baby monitor). Like those who have gone before me, I have now realized (after two days of tearing my hair out) that this is an extremely complicated task.

    Problem statement: I have a Raspberry Pi Zero and a C920 webcam. I want to use the H.264 bitstream from the webcam and serve it on the Pi without transcoding it (the feeble processor would really struggle). I want to combine the video stream with its audio and send it over to a browser (phone, tablet, PC - something HTML5 without NAPI).

    My current strategy is to do the following :

    ffmpeg -re -f s16le -i /dev/zero -f v4l2 -thread_queue_size 512 -codec:v h264 -s 1920x1080 -i /dev/video0 -codec:v copy -acodec aac -ab 128k -g 50 http://localhost:8090/camera.ffm (this is with dummy audio - I figured I would add audio later)

    Followed by sudo ffserver -d -f /etc/ffserver.conf to receive the feed and broadcast it as a stream. This is the ffserver.conf file:

    HTTPPort 8090
    HTTPBindAddress 0.0.0.0
    MaxHTTPConnections 2000
    MaxClients 1000
    MaxBandwidth 100000
    CustomLog -
    <feed>
     File /tmp/streamwebm.ffm
     FileMaxSize 50M
     ACL allow localhost
     ACL allow 128.199.149.46
     #ACL allow 127.0.0.1
     ACL allow 192.168.0.0 192.168.0.255
    </feed>
    <stream stream="stream">
    Format webm

    # Video Settings
    VideoFrameRate 30
    VideoSize 1920x1080

    # Audio settings
    AudioCodec libvorbis
    AudioSampleRate 48000
    AVOptionAudio flags +global_header

    MaxTime 0
    AVOptionVideo me_range 16
    AVOptionVideo qdiff 4
    AVOptionVideo qmin 4
    AVOptionVideo qmax 40
    #AVOptionVideo good
    AVOptionVideo flags +global_header

    # Streaming settings
    PreRoll 10
    StartSendOnKey

    Metadata author "author"
    Metadata copyright "copyright"
    Metadata title "Web app name"
    Metadata comment "comment"
    </stream>

    My basic HTML is: <video><source src="http://localhost:8090/stream"></source></video>

    The stream, however, doesn't work (the browser won't connect) and I get the following:
    [screenshot]

    And the browser on the client says (failed) NET::ERR_CONNECTION_REFUSED

    Thoughts:

    • "Begin stream simple mp4 with ffserver" explains that ffserver can't stream .mp4 because of headers or something. This is why I am using webm (which I believe doesn't support h.264, and the conversion to vp9 is causing the really slow performance). I'm not concerned about CPU usage at the moment, I just want to get an image to appear in the browser!

    • I hear one issue deals with 'chunking' - that the camera h.264 is a bitstream, but h.264 streams for HTML5 should be chunked. Not sure how that would work (see the sketch after this list).

    • I have tried VLC for some things (RTP) but haven't had success.

    • Most resources (SE and other sites) are from 2010-2015 and it seems as though v4l2 and other things have developed since then.

    • As my problem is most likely general ignorance of the subject matter, I would appreciate any answers that provide some general understanding of the theory behind the different techniques. I know this makes the question more of a call for opinion and less appropriate for SE, but I'm fixing to throw my computer out the window (you know the feeling).
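
    For the 'chunking' point above, here is a minimal sketch of what segmenting could look like, using ffmpeg's HLS muxer to package the camera's H.264 stream without re-encoding. The device path, resolution, segment length and output directory are illustrative assumptions, not taken from this setup:

    ffmpeg -f v4l2 -input_format h264 -video_size 1920x1080 -i /dev/video0 \
      -c:v copy -f hls -hls_time 2 -hls_list_size 6 -hls_flags delete_segments \
      /var/www/html/live/stream.m3u8

    The resulting stream.m3u8 playlist and its .ts segments can be served by any web server; Safari plays HLS natively, while other browsers typically need a small MSE-based player such as hls.js.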

    Thank you !

  • Encoding a growing video file in realtime fails prematurely

    17 January 2023, by Macster

    This batch script repeatedly concatenates video clips listed in a text file. The output file is then encoded in realtime into DASH format. Unfortunately the realtime encoding always ends prematurely and I can't figure out why. From what I observed, it shouldn't be possible for the realtime encoding to catch up with the concatenation - which happens each time after the duration of the clip that was just added - because I'm setting an offset, via timeout, for when the encoding has to start.

    I've tried other formats like .mp4 and .h264 and other options, but nothing seems to help. So my assumption is that there is a conflict when read/write operations overlap at a certain point. But how do I find out when, and how do I avoid it? When observing the command prompt, I haven't had the feeling that anything was happening at exactly the same time.
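
    One way to find out whether the two jobs really overlap is to log how much material the growing queue1.webm actually contains while the realtime encoder runs, and compare that with the encoder's current position in its console output. A minimal sketch with ffprobe (reading packet timestamps rather than the header, since the duration of a file that is still being written may not be finalized):

    ffprobe -v error -select_streams v:0 -show_entries packet=pts_time -of csv=p=0 queue1.webm

    The last value printed is the timestamp of the last packet written to the file so far.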

    Command prompt screenshot: it was taken right at the moment of failure. As you can see, the concat file queue1.webm is already more than 10 seconds longer than the realtime encoding at its failing position. That's why I don't think it has to do with catching up too fast. It fails randomly, so one time it fails at 25 seconds and the next time it might fail at 2 minutes and 20 seconds.

    To avoid the possibility of different video settings causing trouble, I'm using only one video file. I will link it here: BigBuckBunny (Mega NZ). It's a 10 sec snippet from BigBuckBunny. I hope this is legal!? But you can use whatever clip you want.

    IMPORTANT: If you try to reproduce the behaviour, please make sure you make at least one entry, like file 'bigbuckbunny_webm.webm', in mylist.txt, because adding something when the file is empty is kinda broken :)

    So here is the code:

    Just the FFmpeg commands:

    ffmpeg -f concat -i "mylist.txt" -safe 0 -c copy -f webm -reset_timestamps 1 -streaming 1 -live 1 -y queue1.webm
    [..]
    ffmpeg -re -i queue1.webm -c copy -map 0:v -use_timeline 1 -use_template 1 -remove_at_exit 0 -window_size 10 -adaptation_sets "id=0,streams=v" -streaming 1 -live 1 -f dash -y queue.mpd

    makedir.bat

    @ECHO on

    :: Create new queue
    IF NOT EXIST "queue1.webm" mkfifo "queue1.webm"

    setlocal EnableDelayedExpansion

    set string=file 'bigbuckbunny_webm.webm'
    set video_path=""
    SET /a c=0
    set file=-1
    set file_before=""

    :loop
    ::Get last entry from "mylist.txt"
    for /f "delims=" %%a in ('type mylist.txt ^| findstr /b /c:"file"') do (
      set video_path=%%a
    )
    echo %video_path%

    ::Insert file 'bigbuckbunny_webm.webm' if mylist.txt is empty.
    if "%video_path%" EQU """" (echo %string% >> mylist.txt && set file=%string:~6,-1%) else (set file=%video_path:~6,-1%)

    ::Insert file 'bigbuckbunny_webm.webm' into mylist.txt if the actual entry (%file%) is the same as before (file 'bigbuckbunny_webm.webm').
    if "%file%" EQU "%file_before%" (echo. >> mylist.txt && echo %string%>>mylist.txt)

    echo %file%

    ::Get the video duration
    for /f "tokens=1* delims=:" %%a in ('ffmpeg -i %file% 2^>^&1 ^| findstr "Duration"') do (set duration=%%b)
    echo %duration%

    ::Crop format to HH:MM:SS
    set duration=%duration:~1,11%
    echo %duration%

    ::Check if seconds are double digits, less than 10, like 09. Then use only 9.
    if %duration:~6,1% EQU 0 (
      set /a sec=%duration:~7,1%
    ) else (
      set /a sec=%duration:~6,2%
    )
    echo %sec%

    ::Convert duration into seconds
    set /a duration=%duration:~0,2%*3600+%duration:~3,2%*60+%sec%
    echo %duration%

    ::Increase iteration count.
    set /a c=c+1

    ::Add new clip to queue.
    ffmpeg -f concat -i "mylist.txt" -safe 0 -c copy -f webm -reset_timestamps 1 -streaming 1 -live 1 -y queue1.webm

    ::Start realtime encoding queue1, if a first clip was added.
    if !c! EQU 1 (
      start cmd /k "startRealtimeEncoding.bat"
    )

    ::Wait the duration of the inserted video
    timeout /t %duration%

    ::Set the actual filename as the previous file for the next iteration.
    set file_before=%file%

    ::Stop after c loops.
    if !c! NEQ 20 goto loop

    echo %c%

    endlocal

    :end

    startRealtimeEncoding.bat

    @ECHO off

    timeout /t 5
    ffmpeg -re -i queue1.webm -c copy -map 0:v -seg_duration 2 -keyint_min 48 -use_timeline 1 -use_template 1 -remove_at_exit 0 -window_size 10 -adaptation_sets "id=0,streams=v" -streaming 1 -live 1 -f dash -y queue.mpd

    :end
