
Other articles (61)

  • Sites built with MediaSPIP

    2 May 2011, by

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • Authorizations overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Supporting all media types

    13 April 2011, by

    Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images (png, gif, jpg, bmp and more); audio (MP3, Ogg, Wav and more); video (AVI, MP4, OGV, mpg, mov, wmv and more); text, code and other data (OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (HTML, CSS), LaTeX, Google Earth) and (...)

On other sites (7487)

  • How to merge video and camera recording together in browser (Chrome especially)?

    5 July 2021, by lzl124631x

    Goal

    I want to record/generate a video in the browser (Chrome especially) with a custom video (e.g. .mp4, .webm) and my camera recording side by side.

    


    --------------------------------------------------
|                        |                       |
|  Some Custom Video     |       My Camera       |
|                        |                       |
--------------------------------------------------


    


    What is working

    I can use MediaRecorder to record my camera, play the recording side by side with my video, and download the recorded video as a webm.

    Challenge

    I'm having difficulty merging the video and the camera recording side by side into a single video file.
    My investigation

    MultiStreamMixer

    I first looked into MultiStreamMixer and built a demo with it (see codepen).

    The issue with it is that it stretches the video content to make both streams the same size. I can specify different widths/heights for the two streams, but it doesn't work as expected: my camera feed gets cropped.

    


    Custom Mixer

    I took a look at the source code of MultiStreamMixer and found that the issue was caused by its simple layout logic. So I used its source code as a reference and built my own custom mixer. See codepen.

    The way it works:

    • First render the streams one by one onto an offscreen canvas.
    • Capture the stream from the canvas as the output video stream.
    • Generate the audio stream separately using AudioContext, createMediaStreamSource, createMediaStreamDestination, etc.
    • Merge the audio and video streams and output them as a single stream.
    • Use MediaRecorder to record the mixed stream.

    It adds black margins to the video/camera and won't stretch the videos.
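    The no-stretch behavior comes down to an aspect-fit (letterbox) computation: scale each stream to fit inside its slot while preserving aspect ratio, center it, and leave the unused area black. A minimal sketch of that layout logic in Python (an illustration of the idea only, not the actual mixer code; the function name is hypothetical):

```python
def aspect_fit(src_w, src_h, slot_w, slot_h):
    """Scale (src_w, src_h) to fit inside (slot_w, slot_h) while
    preserving aspect ratio; return the draw rect (x, y, w, h).
    Unused slot area is left as black margins instead of stretching."""
    scale = min(slot_w / src_w, slot_h / src_h)
    w = round(src_w * scale)
    h = round(src_h * scale)
    x = (slot_w - w) // 2  # center horizontally in the slot
    y = (slot_h - h) // 2  # center vertically in the slot
    return x, y, w, h

# A 640x480 camera fitted into the right half (960x720) of a 1920x720 canvas:
print(aspect_fit(640, 480, 960, 720))    # → (0, 0, 960, 720)
# A 1920x1080 video fitted into the left half gets top/bottom margins:
print(aspect_fit(1920, 1080, 960, 720))  # → (0, 90, 960, 540)
```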

    



    


    However, I found the recording is very blurry if you wave your hand in front of your camera while recording.

    



    


    Initially I thought it was because I hadn't configured some canvas setting correctly. But later I found that even my MultiStreamMixer demo and the WebRTC demo (you can't see the text on the teapot clearly in the recording) generate blurry video with canvas.

    


    I'm asking in the webrtc group to see if I can get around this issue. Meanwhile, I looked into ffmpeg.js.

    


    ffmpeg.js

    


    I think this would "work", but the file is too large. It's impractical to make the customer wait for a 23 MB JS file to download.

    


    Other ways that I haven't tried

    


    The above are my investigations thus far.

    


    Another idea is to play the video and the recorded video side by side and use the screen-recording API to record the merged version (example). But this would require the customer to wait for the same amount of time as the initial recording to get the screen/tab recorded.

    


    Uploading the video to the server and doing the work server-side would be my last resort.

    


  • ffmpeg adds black line between stacked videos

    2 September 2020, by Adam Gosztolai

    I am using the following command to stack two videos.

    


    ffmpeg -i video_1.mp4 -i video_2.mp4 -filter_complex "[0:v]scale=-1:500,pad='iw+mod(iw\,2)':'ih+mod(ih\,2)'[v0];[v0][1:v]hstack=inputs=2" output.mp4


    


    Not sure if this matters, but video_1.mp4 is static (I created it from a .png) and is much shorter than video_2.mp4. So when I execute the command, ffmpeg duplicates frames, as indicated by the "More than 1000 frames duplicated" message.

    


    My issue is that the resulting video has a vertical black line between the two videos (between the illustration on the left and the "joint angles" on the right).

    



    


    This vertical line is not there if I stack video_1.mp4 or video_2.mp4 with itself.
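    One plausible cause (an assumption, since the source dimensions aren't given in the question): scale=-1:500 keeps the aspect ratio, so the scaled width can come out odd, and pad='iw+mod(iw\,2)' then adds a one-pixel black column on the right, which hstack shows as a vertical line between the two videos. A quick sketch of the arithmetic:

```python
def scaled_then_padded_width(src_w, src_h, target_h=500):
    """Mimic ffmpeg's scale=-1:500 followed by pad='iw+mod(iw,2)':
    scale keeps the aspect ratio, then pad rounds the width up to even,
    filling the extra column with black."""
    scaled_w = round(src_w * target_h / src_h)
    padded_w = scaled_w + scaled_w % 2
    black_column = padded_w - scaled_w  # 1 means a visible black line
    return scaled_w, padded_w, black_column

# Hypothetical 805x600 source: scaling to height 500 gives an odd width,
# so pad adds a 1px black column before hstack.
print(scaled_then_padded_width(805, 600))  # → (671, 672, 1)
print(scaled_then_padded_width(900, 600))  # → (750, 750, 0), no line
```

    If that is indeed the cause, letting scale produce an even width directly (e.g. scale=-2:500, which rounds the width to be divisible by 2) instead of padding should avoid the black column.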

    


    I have no idea what is going on. Could someone help?

    


  • FFMPEG save last 10 sec before a movement and next 30secs

    2 November 2020, by Set1ish

    I have a surveillance camera that processes live video frame by frame. In case of movement, I want to save a video containing the last 10 seconds before the movement and the 30 seconds after it.
    I believe (maybe I'm wrong) that this last-10-seconds + next-30-seconds task should be achievable without a decode/re-encode process.

    


    I tried to use Python with ffmpeg pipes, creating a reader and a writer, but the reader seems too slow for the stream and I lose some packets (= a loss of video quality in the saved file).

    


    Here is my code

    


import ffmpeg
import numpy as np

width = 1280
height = 720

# Reader: decode the RTSP stream into raw yuv420p frames on stdout
process1 = (
    ffmpeg
    .input('rtsp://.....', rtsp_transport='udp', r='10', t="00:00:30")
    .output('pipe:', format='rawvideo', pix_fmt='yuv420p')
    .run_async(pipe_stdout=True)
)

# Writer: encode raw yuv420p frames from stdin into an AVI file
process2 = (
    ffmpeg
    .input('pipe:', format='rawvideo', pix_fmt='yuv420p', s='{}x{}'.format(width, height))
    .output("prova-02-11-2020.avi", pix_fmt='yuv420p', r='10')
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

# A yuv420p frame is width * height * 3 / 2 bytes, not width * height * 3
frame_size = width * height * 3 // 2

while True:
    in_bytes = process1.stdout.read(frame_size)
    if len(in_bytes) < frame_size:
        break
    in_frame = np.frombuffer(in_bytes, np.uint8)

    # In future I will save in_frame in a queue
    out_frame = in_frame

    process2.stdin.write(out_frame.tobytes())

process2.stdin.close()
process1.wait()
process2.wait()

    


    If I run

    


    ffmpeg -i rtsp://... -acodec copy -vcodec copy -t "00:00:30" out.avi


    


    it looks like the decode/re-encode process is done in a quick/smart way without losing any packets.
My dream is to do the same in Python for the surveillance camera, but integrated with code that analyses the stream.

    


    I would like the flow for creating the file not to require decoding + encoding. The last 10 seconds of frames are kept in a queue and, on a specific event, the contents of the queue plus the next 30 seconds of frames are saved into an AVI file.
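    The "last 10 seconds in a queue" part can be sketched with a bounded deque: at 10 fps, 10 seconds is 100 frames, so a deque with maxlen=100 automatically discards the oldest frame as each new one arrives. This is a sketch of the buffering logic only; the recorder here is a hypothetical object with start()/write() methods, and the frames could be raw frames or encoded packets.

```python
from collections import deque

FPS = 10
PRE_SECONDS = 10
pre_buffer = deque(maxlen=FPS * PRE_SECONDS)  # holds the last 10 s of frames

def on_frame(frame, motion_detected, recorder):
    """Feed every live frame here. Before motion, frames rotate through
    the bounded pre-buffer; when motion starts, flush the buffered 10 s
    into the recorder, then keep writing subsequent frames directly."""
    if recorder.active:
        recorder.write(frame)          # within the post-event window
    elif motion_detected:
        recorder.start()
        for buffered in pre_buffer:    # the 10 s before the event
            recorder.write(buffered)
        pre_buffer.clear()
        recorder.write(frame)          # the frame that triggered motion
    else:
        pre_buffer.append(frame)       # oldest frame drops out automatically
```

    Stopping the recorder after 30 s (300 frames at 10 fps) and re-arming the pre-buffer is left out for brevity.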

    


    I have the constraint of doing real-time motion detection on the live stream.

    


    Do you have any comments or suggestions?