
Other articles (21)

  • Images

    15 May 2013
  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critique of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; and translations of existing documentation into other languages.
    To contribute, register to the project users’ mailing (...)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP in specific projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen such associations. Its members (...)

On other sites (5738)

  • Streaming raw h264 video from Raspberry PI to server for capture and viewing [closed]

    24 June 2024, by tbullers

    This is really an optimization question - I have been able to stream h264 from a Raspberry Pi 5 to a Linux system, capture the streams, and save them to .mp4 files.

    


    But I intend to run the video capture and streaming on a battery-powered Pi Zero 2 W, and I want to use the least amount of power to maximize battery life while still providing good video quality.

    


    I've explored many different configuration settings but am getting lost in all the options.

    


    This is what I run on the Pi:

    


    rpicam-vid -t 30s --framerate 30 --hdr --inline --listen -o tcp://0.0.0.0:5000


    


    I retrieve this video on the more powerful Ubuntu server with:

    


    ffmpeg -r 30 -i tcp://ralph:5000 -vcodec copy video_out103.mp4


    


    It generally works, but I receive lots of errors on the server side like this:

    


    [mp4 @ 0x5f9aab5d0800] pts has no valuee= 975.4kbits/s speed=1.19x
Last message repeated 15 times
[mp4 @ 0x5f9aab5d0800] pts has no valuee=1035.3kbits/s speed=1.19x
Last message repeated 15 times
[mp4 @ 0x5f9aab5d0800] pts has no valuee=1014.8kbits/s speed=1.18x
Last message repeated 9 times
[mp4 @ 0x5f9aab5d0800] pts has no valuee=1001.1kbits/s speed=1.17x
Last message repeated 7 times
[mp4 @ 0x5f9aab5d0800] pts has no value
Last message repeated 1 times
[out#0/mp4 @ 0x5f9aab5ad5c0] video:3546kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead : 0.120360%
size= 3550kB time=00:00:27.50 bitrate=1057.5kbits/s speed=1.18x

    


    Any suggestions on how to correct these errors?
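
    My current understanding (which may well be wrong) is that rpicam-vid sends a raw H.264 elementary stream, which carries no container timestamps, so the mp4 muxer has nothing to use as pts. A variation of the receive command I have sketched but not fully tested is to declare the input as raw H.264 and let ffmpeg generate timestamps from the stated frame rate:

    ffmpeg -fflags +genpts -f h264 -framerate 30 -i tcp://ralph:5000 -c copy video_out.mp4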

    


    Also, any suggestions on how to make the video capture side more efficient? Should I use a different codec (yuv instead of h264)? Would using UDP decrease overhead? Can I improve video quality with the mode or hdr options? What does denoise do?
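
    For the UDP part, the sketch below is what I would try first, going by the Raspberry Pi camera documentation (SERVER_IP is a placeholder and I have not measured the power difference); with UDP the Pi pushes the stream to the server instead of listening:

    # on the Pi Zero 2 W
    rpicam-vid -t 30s --framerate 30 --inline -o udp://SERVER_IP:5000

    # on the Ubuntu server
    ffmpeg -f h264 -framerate 30 -i udp://0.0.0.0:5000 -c copy video_out.mp4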

    


    With all the options available in these tools, I think it's unlikely that I have a well-thought-out approach to capture and streaming. I'm hoping that people who are more familiar with this space might be able to provide some suggestions.

    


    Thank you!

    


    -tom

    


  • FFmpeg eating up RAM in Railway deployment [Flask app]

    23 June 2024, by Eshan Das

    I created a Flask app, a meme generator, and I am hosting it on Railway using gunicorn; I suspect ffmpeg is eating the most RAM.
    What I am doing is generating a TTS clip, adding text onto one of the parts of the video, and then finally combining everything together.

    


from flask import Flask, request, jsonify, send_file, url_for
from moviepy.editor import VideoFileClip, ImageClip, CompositeVideoClip, AudioFileClip, concatenate_videoclips
from pydub import AudioSegment
from PIL import Image, ImageDraw, ImageFont
from gtts import gTTS
from flask_cors import CORS
import os

app = Flask(__name__)
CORS(app)
app.config['UPLOAD_FOLDER'] = 'uploads'

def generate_video(name, profile_image_path, song_path, start_time):
    first_video = VideoFileClip("first.mp4")
    second_video = VideoFileClip("second.mp4")
    third_video = VideoFileClip("third.mp4")

    #font_path = os.path.join("fonts", "arial.ttf")  # Updated font path
    font_size = 70
    font = ImageFont.load_default()
    text = name
    image_size = (second_video.w, second_video.h)
    text_image = Image.new('RGBA', image_size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(text_image)
    text_width, text_height = draw.textsize(text, font=font)
    text_position = ((image_size[0] - text_width) // 2, (image_size[1] - text_height) // 2)
    draw.text(text_position, text, font=font, fill="black")

    text_image_path = os.path.join(app.config['UPLOAD_FOLDER'], f"text_{name}.png")
    text_image.save(text_image_path)

    txt_clip = ImageClip(text_image_path, duration=second_video.duration)

    tts = gTTS(text=name, lang='en')
    audio_path = os.path.join(app.config['UPLOAD_FOLDER'], f"audio_{name}.wav")
    tts.save(audio_path)

    sound = AudioSegment.from_file(audio_path)
    chipmunk_sound = sound._spawn(sound.raw_data, overrides={
        "frame_rate": int(sound.frame_rate * 1.5)
    }).set_frame_rate(sound.frame_rate)

    chipmunk_audio_path = os.path.join(app.config['UPLOAD_FOLDER'], f"chipmunk_audio_{name}.wav")
    chipmunk_sound.export(chipmunk_audio_path, format="wav")

    audio_clip_second = AudioFileClip(chipmunk_audio_path)

    second_video = CompositeVideoClip([second_video, txt_clip.set_position((45, 170))])
    second_video = second_video.set_audio(audio_clip_second)

    song = AudioSegment.from_file(song_path)
    start_ms = start_time * 1000
    cropped_song = song[start_ms:start_ms + 20000]

    chipmunk_song = cropped_song._spawn(cropped_song.raw_data, overrides={
        "frame_rate": int(cropped_song.frame_rate * 1.5)
    }).set_frame_rate(cropped_song.frame_rate)

    chipmunk_song_path = os.path.join(app.config['UPLOAD_FOLDER'], f"chipmunk_song_{name}.wav")
    chipmunk_song.export(chipmunk_song_path, format="wav")

    audio_clip_third = AudioFileClip(chipmunk_song_path)
    third_video = third_video.set_audio(audio_clip_third)

    profile_image = ImageClip(profile_image_path).set_duration(first_video.duration).resize(height=first_video.h / 8).set_position((950, 500))
    first_video = CompositeVideoClip([first_video, profile_image])

    profile_image = ImageClip(profile_image_path).set_duration(second_video.duration).resize(height=second_video.h / 8).set_position((950, 500))
    second_video = CompositeVideoClip([second_video, profile_image])

    profile_image = ImageClip(profile_image_path).set_duration(third_video.duration).resize(height=third_video.h / 8).set_position((950, 500))
    third_video = CompositeVideoClip([third_video, profile_image])

    final_video = concatenate_videoclips([first_video, second_video, third_video])
    final_video = final_video.subclip(0, 10)

    output_path = os.path.join(app.config['UPLOAD_FOLDER'], f"output_{name}.mp4")
    final_video.write_videofile(output_path, codec="libx264", audio_codec='aac')

    final_video.close()
    first_video.close()
    second_video.close()
    third_video.close()
    audio_clip_second.close()
    audio_clip_third.close()

    os.remove(audio_path)
    os.remove(text_image_path)
    os.remove(chipmunk_song_path)
    os.remove(chipmunk_audio_path)

    return output_path

@app.route('/generate', methods=['POST'])
async def generate():
    if not os.path.exists(app.config['UPLOAD_FOLDER']):
        os.makedirs(app.config['UPLOAD_FOLDER'])

    name = request.form['name']
    start_time = float(request.form['start_time'])

    profile_image = request.files['profile_image']
    profile_image_path = os.path.join(app.config['UPLOAD_FOLDER'], profile_image.filename)
    profile_image.save(profile_image_path)

    song = request.files['song']
    song_path = os.path.join(app.config['UPLOAD_FOLDER'], song.filename)
    song.save(song_path)

    video_path = generate_video(name, profile_image_path, song_path, start_time)
    return jsonify({"video_url": url_for('uploaded_file', filename=os.path.basename(video_path))})

@app.route('/uploads/<filename>')
def uploaded_file(filename):
    path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    return send_file(path, as_attachment=True)

if __name__ == "__main__":
    if not os.path.exists(app.config['UPLOAD_FOLDER']):
        os.makedirs(app.config['UPLOAD_FOLDER'])
    app.run(host='0.0.0.0', port=5000)

    I am trying to solve this issue. Is there any way to optimize my code, or any alternative to ffmpeg that I can use, to reduce the RAM consumption?
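
    One direction I am looking at (not yet verified) is that most of the memory presumably goes into moviepy decoding and re-encoding every frame in Python during concatenate_videoclips and write_videofile. If the three parts were first written to intermediate files with identical encoding parameters, ffmpeg's concat demuxer could join them by stream copy instead; the file names below are hypothetical:

    # parts.txt (hypothetical intermediate clips, all encoded with the same parameters)
    file 'uploads/part_first.mp4'
    file 'uploads/part_second.mp4'
    file 'uploads/part_third.mp4'

    # join without re-encoding
    ffmpeg -f concat -safe 0 -i parts.txt -c copy uploads/output.mp4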


  • FFmpeg "Non-monotonous DTS in output stream" error when processing video from Safari's MediaRecorder

    17 July 2024, by Hackermon

    I'm recording a video stream in Safari with MediaRecorder, then sending it to a remote server, which uses ffmpeg to re-encode the video. When re-encoding with FFmpeg, I get a lot of warnings and the final video is broken: frames glitch and are out of sync, but the audio sounds fine.

    Here's my MediaRecorder script:

const camera = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
const recorder = new MediaRecorder(camera, {
    mimeType: 'video/mp4', // Safari only supports MP4
    bitsPerSecond: 1_000_000,
});

recorder.ondataavailable = async ({ data: blob }) => {
    // open contents in new tab
    const fileURL = URL.createObjectURL(blob);
    window.open(fileURL, '_blank');
};

recorder.start();
setTimeout(() => recorder.stop(), 5000);

    I download the video blob from Safari and use this command to re-encode it:

    ffmpeg -i ./blob.mp4 -preset ultrafast -strict -2 -threads 10 -c copy ./output.mp4

    Logs:

ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
  configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 31.100 / 56. 31.100
  libavcodec     58. 54.100 / 58. 54.100
  libavformat    58. 29.100 / 58. 29.100
  libavdevice    58.  8.100 / 58.  8.100
  libavfilter     7. 57.100 /  7. 57.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  5.100 /  5.  5.100
  libswresample   3.  5.100 /  3.  5.100
  libpostproc    55.  5.100 / 55.  5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x559826616000] DTS 29 < 313 out of order
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from './chunk1.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    creation_time   : 2024-07-17T14:30:47.000000Z
  Duration: 00:00:01.00, start: 0.000000, bitrate: 3937 kb/s
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], 6218 kb/s, 33.36 fps, 600 tbr, 600 tbn, 1200 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
File './safari3.mp4' already exists. Overwrite ? [y/N] Output #0, mp4, to './safari3.mp4':
  Metadata:
    major_brand     : iso5
    minor_version   : 1
    compatible_brands: isomiso5hlsf
    encoder         : Lavf58.29.100
    Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=2-31, 6218 kb/s, 33.36 fps, 600 tbr, 19200 tbn, 600 tbc (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Video
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 420 kb/s (default)
    Metadata:
      creation_time   : 2024-07-17T14:30:47.000000Z
      handler_name    : Core Media Audio
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10016, current: 928; changing to 10017. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10017, current: 1568; changing to 10018. This may result in incorrect timestamps in the output file.
[mp4 @ 0x559826644340] Non-monotonous DTS in output stream 0:0; previous: 10018, current: 2208; changing to 10019. This may result in incorrect timestamps in the output file.
...x100
frame=  126 fps=0.0 q=-1.0 Lsize=     479kB time=00:00:00.97 bitrate=4026.2kbits/s speed= 130x
video:425kB audio:51kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.652696%

    Not sure what's happening or how to fix it. This issue only happens in Safari; videos from Chrome are perfectly fine.

    I've tried various flags:

    -fflags +igndts
    -bsf:a aac_adtstoasc
    -c:v libvpx-vp9 -c:a libopus
    etc

    None of them seem to fix the issue.
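
    A remaining avenue, untested against this particular Safari recording, would be to drop -c copy entirely, since a pure stream copy passes the broken DTS through, and re-encode so that ffmpeg decodes the frames and writes fresh timestamps:

    ffmpeg -fflags +genpts+igndts -i ./blob.mp4 -c:v libx264 -preset ultrafast -c:a aac -movflags +faststart ./output.mp4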
