
Other articles (47)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add yours using the form at the bottom of the page.

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp, etc.); audio (MP3, Ogg, Wav, etc.); video (Avi, MP4, Ogv, mpg, mov, wmv, etc.); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (HTML, CSS), LaTeX, Google Earth) (...)

  • Media-specific libraries and software

    10 December 2010

    For correct and optimal operation, several things need to be taken into consideration.
    After installing apache2, mysql and php5, it is important to install the other required software, whose installation is described in the related links: a set of multimedia libraries (x264, libtheora, libvpx) used for encoding and decoding video and audio, in order to support as many file types as possible (cf. this tutorial); FFMpeg built with the maximum number of decoders and (...)
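
    As a rough illustration only (the package names below are assumptions; the exact steps are in the tutorials this article links to), the stack described above maps to something like the following on a Debian/Ubuntu server of that era:

    # hypothetical package names for the stack described above
    sudo apt-get install apache2 mysql-server php5
    # multimedia libraries used for video/audio encoding and decoding
    sudo apt-get install libx264-dev libtheora-dev libvpx-dev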

On other sites (9437)

  • How to convert an MJPEG stream to YUV420p, then hardware encode to h264 on rpi4 using go2rtc and frigate? [closed]

    13 June, by Josh Pirihi

    I am putting together a dashcam/DVR/reversing-camera system for my van, using analogue HD reversing cameras and AHD-to-USB dongles with a Raspberry Pi 4. The Pi runs Frigate in Docker on a fresh Raspberry Pi OS install, and the AHD dongles show up straight away as /dev/video0 when plugged in.

    I am running into an issue getting the MJPEG stream from the dongle accepted by the hardware h264 encoder. I can feed the hardware encoder the raw YUYV 4:2:2 stream, but bandwidth limitations cut the framerate intolerably low (720p at 10fps, 1080p at 5fps). Similarly, the software encoder can convert the MJPEG stream at 30fps, but it uses 200% CPU per camera, which won't scale once I add more cameras (at least 2 in total, maybe more).

    I have played around with Frigate, and have stripped the setup back to just the go2rtc Docker container for troubleshooting until I get this working.

    Here is the output from the go2rtc FFMPEG Devices (USB) tab:

    [Screenshot: go2rtc FFMPEG Devices tab]

    The basic go2rtc config gives me 720p at 10fps using the hardware encoder. This ingests the raw stream, I think:

    streams:
      grill:
        - "ffmpeg:device?video=0&video_size=1280x720#video=h264#hardware"

    Telling it to use MJPEG results in an error:

    streams:
      grill:
        - "ffmpeg:device?video=0&input_format=mjpeg&video_size=1280x720#video=h264#hardware"

    go2rtc-1  | 19:34:14.379 WRN [rtsp] error="streams: exec/rtsp\n[h264_v4l2m2m @ 0x7facadfb40] Encoder requires yuv420p pixel format.\n[vost#0:0/h264_v4l2m2m @ 0x7faf9aa3a0] Error while opening encoder - maybe incorrect parameters such as bit_rate, rate, width or height.\nError while filtering: Invalid argument\n[out#0/rtsp @ 0x7fafff7ec0] Nothing was written into output file, because at least one of its streams received no packets.\n" stream=grill
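
    The error indicates that the MJPEG decoder's output pixel format (typically yuvj422p for motion JPEG) is being fed straight into h264_v4l2m2m, which only accepts yuv420p. To confirm what the dongle actually offers, its formats can be listed with a standard ffmpeg v4l2 invocation (not from the original post):

    # list the pixel formats and frame sizes the capture device advertises
    ffmpeg -f v4l2 -list_formats all -i /dev/video0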

    I tried splitting the pipeline into steps: ingest the MJPEG, convert the pixel format, then encode to h264. Both _mjpeg feeds work, but the final encoded feed fails with the same encoder error:

    streams:
      grill_mjpeg: "ffmpeg:device?video=/dev/video0&input_format=mjpeg&video_size=1920x1080"
      grill_mjpeg_yuv: exec:ffmpeg -i http://localhost:1984/api/stream.mjpeg?src=grill_mjpeg -pix_fmt yuv420p -c:v copy -rtsp_transport tcp -f rtsp {output}
      grill: ffmpeg:http://localhost:1984/api/stream.mjpeg?src=grill_mjpeg_yuv#video=h264#hardware

    go2rtc-1  | 19:39:07.871 WRN [rtsp] error="streams: exec/rtsp\n[h264_v4l2m2m @ 0x7f7f1aca70] Encoder requires yuv420p pixel format.\n[vost#0:0/h264_v4l2m2m @ 0x7f820f83b0] Error while opening encoder - maybe incorrect parameters such as bit_rate, rate, width or height.\nError while filtering: Invalid argument\n[out#0/rtsp @ 0x7f82745ec0] Nothing was written into output file, because at least one of its streams received no packets.\n" stream=grill

    If I change grill_mjpeg_yuv to use "-c:v mjpeg" instead of copy, it pegs one CPU core at 100% and the stream never outputs anything.
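
    For what it's worth: with -c:v copy, ffmpeg never decodes the frames, so the -pix_fmt yuv420p option in grill_mjpeg_yuv has no effect, and the hardware encoder downstream still receives the MJPEG decode's native format. A minimal single-stream sketch that decodes the MJPEG and converts to yuv420p before the v4l2m2m encoder (the bitrate and device path are assumptions, untested on this hardware):

    streams:
      # sketch only: -b:v 4M and /dev/video0 are assumptions, not from the post
      grill: exec:ffmpeg -f v4l2 -input_format mjpeg -video_size 1920x1080 -i /dev/video0 -pix_fmt yuv420p -c:v h264_v4l2m2m -b:v 4M -rtsp_transport tcp -f rtsp {output}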

    Can anyone offer any tips?

    As a small side consideration, having an intermediate MJPEG feed available would be helpful for displaying the reversing camera on an in-van monitor with the lowest possible latency; however, I want h264 streams for recording and for viewing over the van's 4G connection.

  • How to add a fake microphone to an Android emulator running headless on Linux

    2 June, by Red

    I am trying to add a microphone to an Android emulator running on headless Linux. The host has no microphone, so I need to create a fake one and simulate playing some random music for my Android test.

    The emulator command has an option to use the host audio as input, by passing -allow-host-audio, but it is not working on the phone.

    How to create a virtual microphone
    Start pulseaudio:

    pulseaudio -D --exit-idle-time=-1

    Create the fake mic:

    pactl load-module module-null-sink sink_name=FakeSink
    pactl load-module module-remap-source master=FakeSink.monitor source_name=FakeMic

    And set it as the default source:

    pactl set-default-source FakeSink.monitor
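
    One step the sequence above does not show is actually playing audio into FakeSink so that FakeMic carries a signal; a minimal sketch, assuming some audio file music.wav exists (the file name is a placeholder):

    # music.wav is a placeholder; any audio file works
    paplay --device=FakeSink music.wav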

    Microphone test:

    $ ffmpeg -f pulse -i default out.wav
    $ sox out.wav -n stat
    Samples read:            579654
    Length (seconds):      6.038062
    Scaled by:         2147483647.0
    Maximum amplitude:     0.533081
    Minimum amplitude:    -0.585297
    Midline amplitude:    -0.026108
    Mean    norm:          0.067096
    Mean    amplitude:     0.003363
    RMS     amplitude:     0.093545
    Maximum delta:         0.603760
    Minimum delta:         0.000000
    Mean    delta:         0.073738
    RMS     delta:         0.105326
    Rough   frequency:         8601
    Volume adjustment:        1.709

    Run the emulator:

    emulator -avd and_1 -allow-host-audio -no-window

    There was no audio on the phone.
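
    Before blaming the emulator, one sanity check (not in the original post) is to record from the remapped source by name while something plays into FakeSink (e.g. via paplay as above), to confirm the fake mic itself carries audio:

    # records 5 seconds from the FakeMic source created above
    ffmpeg -f pulse -i FakeMic -t 5 fakemic_test.wav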

  • 4K Screen Recording on 1080p Monitors [closed]

    10 April, by Souhail Benlhachemi

    I have created a basic Windows screen-recording app (ffmpeg + GUI), but I noticed that the recording quality depends on the monitor used to record: a recording made on a full HD monitor differs in quality from one made on a 4K monitor (which is obvious).

    There is not much difference between the two when playing the recorded video at 100% scale, but when I zoom to 150% or more, the difference between the two recordings (1920x1080 vs 4K) is clearly visible.

    I did some research on how to capture the screen at 4K quality on a full HD monitor, and here is what I found:

    I played with the Windows Desktop Duplication API (the AcquireNextFrame function, which gives you the next frame from the swap chain). I managed to convert the buffer to a PNG image and save it locally, but as you would expect, the quality was the same as a normal screenshot, because AcquireNextFrame returns a frame after it has been rasterized.

    Then I came across the graphics pipeline. After spending some time on the basics, I concluded that I would need to somehow intercept the pre-rasterization data (everything before the Rasterizer Stage: geometry shaders, etc.), duplicate it, and do an off-screen render to a new 4K render target. But the Windows API does not allow this. The only documented option is the Stream Output Stage, which is only useful for rendering your own shaders, not the ones my display is using. (I tried to intercept the data with MinHook, but no luck.)

    After that, I tried a different approach: I created a virtual display as an extended 4K monitor and recorded it with ffmpeg. But what appears on my main monitor differs from the virtual display (which shows only an empty desktop); I would have to drag application windows onto that screen manually, which defeats the purpose while recording, since I would no longer see what I am recording.

    I found some YouTube videos about DSR (Dynamic Super Resolution). I tried it in the NVIDIA Control Panel (manually, via the GUI) and it works: I managed to make the system believe I have a 4K monitor, and the recording quality was crystal clear. But I found no way to do this programmatically via NVAPI, and there is no equivalent API on AMD.

    Has anyone worked on a similar project, or does anyone know of a similar project I could use as a reference?

    Any suggestions?