Other articles (100)

  • MediaSPIP 0.1 Beta version

    25 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
    After it is activated, MediaSPIP init automatically sets up a preconfiguration so that the new feature is immediately operational. No configuration step is therefore required.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

On other sites (8563)

  • ffmpeg - Stream webcam - RTP h264 + audio

    17 May 2016, by Hossein

    I am trying to create an RTP stream using ffmpeg. I am taking input from my Logitech C920, which has built-in H.264 encoding support and also has a microphone. I want to send both the video (H.264, either from the built-in encoder or from ffmpeg's encoder) and the audio (any encoding) over RTP, and then play the stream using ffplay.

    So far I am able to send only the video, with the following command:

    ffmpeg -i /dev/video0 -r 24 -video_size 320x240 -c:v libx264 -f rtp rtp://127.0.0.1:9999

    and the audio separately, using the command:

    ffmpeg -f alsa -i plughw:CARD=C920,DEV=0 -acodec libmp3lame -t 20 -f rtp rtp://127.0.0.1:9998

    and play the SDP files with:

    ffplay -protocol_whitelist file,udp,rtp -i test3.sdp
    ffplay -protocol_whitelist file,udp,rtp -i test4.sdp

    I'm on Ubuntu 14.04.

    How can I play the two streams with a single ffplay command, given that ffplay cannot take two inputs and I can't send two streams over a single RTP stream (or can I?).
    Also, how can I use the built-in H.264 encoder of my webcam?

    Thank you !
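
    One direction that might work (a sketch, not verified on this hardware): capture both devices in a single ffmpeg run, map the video and the audio to separate RTP outputs, and write one combined SDP that a single ffplay instance can open. Requesting -input_format h264 on the v4l2 input asks the C920 for its built-in encoder, so -c:v copy then avoids re-encoding. The ports and frame size are kept from the question; whether the camera offers H.264 at 320x240 is an assumption, and -sdp_file needs a reasonably recent ffmpeg (older builds print the SDP to stdout instead).

    ffmpeg -f v4l2 -input_format h264 -video_size 320x240 -i /dev/video0 \
           -f alsa -i plughw:CARD=C920,DEV=0 \
           -map 0:v -c:v copy -f rtp rtp://127.0.0.1:9999 \
           -map 1:a -c:a libmp3lame -f rtp rtp://127.0.0.1:9998 \
           -sdp_file stream.sdp

    ffplay -protocol_whitelist file,udp,rtp -i stream.sdp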

  • MoviePY write_videofile using GPU for faster encoding [closed]

    5 May, by kaushal

    I'm creating a video from scratch using MoviePy. I generate all the required frames, add the required audio (including a voiceover and background music), add a logo, and finally write the video file in 4K.

    Everything works fine, except that write_videofile takes a lot of time.

    I have read many related posts that mention using the right codec, etc. I have an NVIDIA card, so I tried both h264_nvenc and hevc_nvenc. The quality of the output video dropped with the first one, so I'm sticking with hevc_nvenc. I'm using the line below to write the file.

            video_clip.write_videofile(targetfile, codec="hevc_nvenc", threads=32, fps=24)

    What I have noticed is that it does seem to be using the GPU, but only very little. By comparison, when I run Stable Diffusion or Vegas rendering, the GPU is used a lot more.

    That's why I think there is definitely room for improvement here. As the screenshot below shows, when the video file write starts, GPU utilisation increases a tiny bit, but I think it could take a lot more.

    [screenshot: GPU utilisation while write_videofile runs]

    I can try various parameters that I've seen in other threads, like logger=None, progress_bar=False, ffmpeg_params=['-b:v','10000k'], etc., but they are not going to improve GPU utilisation in any shape or form. I've been wondering what I am missing.

    Any ideas or suggestions, please?
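
    A sketch of one thing worth checking (an assumption about the setup, not a verified fix): MoviePy renders each frame in Python and pipes raw frames into ffmpeg, so the encoder often sits idle waiting for frames; NVENC also runs on a dedicated encode engine that many GPU monitors report separately from the 3D/CUDA load. Passing an explicit NVENC preset and bitrate through ffmpeg_params at least rules the encoder settings in or out:

            # A sketch, assuming MoviePy 1.x and an ffmpeg build with NVENC support.
            # "p5" is a newer NVENC preset name (older builds use names like "slow");
            # check ffmpeg -h encoder=hevc_nvenc for the values your build accepts.
            video_clip.write_videofile(
                targetfile,
                codec="hevc_nvenc",
                fps=24,
                threads=32,
                ffmpeg_params=["-preset", "p5", "-b:v", "10000k"],
            )

    If encoder usage stays low even then, the bottleneck is likely the Python-side frame generation, which no encoder flag will improve.
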
  • Merging video and audio stream, where audio drifts

    22 April 2015, by TL_IPD

    I want to record audio and video with my Raspberry Pi B+ 2.
    I tried to accomplish this with a single ffmpeg command, but that was too slow and I could not get it working correctly.

    I have a Raspberry Pi camera module and a Cirrus audio card. On the Raspberry Pi I have compiled a new kernel with support for the audio card. I also compiled ffmpeg on the Raspberry Pi with ALSA support:

    ~$ ffmpeg
    ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
    built with gcc 4.6 (Debian 4.6.3-14+rpi1)
    configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
     libavutil      54. 22.101 / 54. 22.101
     libavcodec     56. 34.100 / 56. 34.100
     libavformat    56. 30.100 / 56. 30.100
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 14.100 /  5. 14.100
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Hyper fast Audio and Video encoder
    usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...

    Now I try to record an audio stream and a video stream ’at the same time’.
    I do this by running a shell script:

    raspivid -t 60000 -vs -w 1280 -h 720 -b 5000000 -fps 25 -o video.h264 &
    arecord -Dhw:sndrpiwsp -r 44100 -c 2 -d 60 -f S32_LE audio.aac

    I also tried with -r 22050 and -f S16_LE.

    When running this, it sometimes gives (I think) an

    overrun!!! (at least 1038.725 ms long)

    At the end of the script I have two files: a video file and an audio file.

    Now I want to merge the two together using ffmpeg:

    ffmpeg -i video.h264 -i audio.aac -c:v copy -c:a aac -strict experimental output.mp4

    This gives the output:

    ffmpeg version N-71470-g2db24cf Copyright (c) 2000-2015 the FFmpeg developers
    built with gcc 4.6 (Debian 4.6.3-14+rpi1)
    configuration: --arch=armel --target-os=linux --enable-gpl --extra-libs=-lasound --enable-nonfree
     libavutil      54. 22.101 / 54. 22.101
     libavcodec     56. 34.100 / 56. 34.100
     libavformat    56. 30.100 / 56. 30.100
     libavdevice    56.  4.100 / 56.  4.100
     libavfilter     5. 14.100 /  5. 14.100
     libswscale      3.  1.101 /  3.  1.101
     libswresample   1.  1.100 /  1.  1.100
     libpostproc    53.  3.100 / 53.  3.100
    Input #0, h264, from 'video_1min_3.h264':
     Duration: N/A, bitrate: N/A
       Stream #0:0: Video: h264 (High), yuv420p, 1280x720, 25 fps, 25 tbr, 1200k tbn, 50 tbc
    Guessed Channel Layout for  Input Stream #1.0 : stereo
    Input #1, wav, from 'audio_1min_3.aac':
     Duration: 00:01:00.00, bitrate: 705 kb/s
       Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 22050 Hz, 2 channels, s16, 705 kb/s
    [mp4 @ 0x3230f20] Codec for stream 0 does not use global headers but container format requires global headers
    Output #0, mp4, to 'output_1min_3.mp4':
     Metadata:
       encoder         : Lavf56.30.100
       Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuv420p, 1280x720, q=2-31, 25 fps, 25 tbr, 1200k tbn, 1200k tbc
       Stream #0:1: Audio: aac ([64][0][0][0] / 0x0040), 22050 Hz, stereo, fltp, 128 kb/s
       Metadata:
         encoder         : Lavc56.34.100 aac
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
     Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
    Press [q] to stop, [?] for help
    frame= 1822 fps=310 q=-1.0 Lsize=   33269kB time=00:01:12.84 bitrate=3741.7kbits/s
    video:32300kB audio:941kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.086073%

    So finally I have a file output.mp4: a movie whose audio is in sync at the beginning but drifts to a difference of about 4 seconds by the end, with the audio ahead of the video.

    I hope you can help me solve this issue so the audio no longer drifts.

    Thanks in advance

    (I tried to be as clear as possible.)
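
    One plausible cause, sketched from the log above and not verified on this hardware: the raw .h264 file carries no timestamps, so ffmpeg assumes 25 fps when reading it, yet the log shows 1822 frames for a 60-second recording, i.e. about 30.4 fps. Muxed at 25 fps, the video plays for roughly 72.8 s against 60 s of audio, which matches the audio running ahead. Declaring the measured rate on the video input may remove the drift (the arecord overruns can still leave a small residual offset, since dropped samples shorten the audio):

    ffmpeg -framerate 30.37 -i video.h264 -i audio.aac \
           -c:v copy -c:a aac -strict experimental output.mp4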