
Other articles (54)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" section of the site.
    From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
    Each newly added language can still be disabled as long as no object has been created in that language; once one has, the language becomes greyed out in the configuration and (...)

  • Accepted formats

    28 January 2010

    The following commands provide information about the formats and codecs supported by the local ffmpeg installation:
    ffmpeg -codecs
    ffmpeg -formats
    Accepted input video formats
    This list is not exhaustive; it highlights the main formats in use: h264: H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10; m4v: raw MPEG-4 video format; flv: Flash Video (FLV) / Sorenson Spark / Sorenson H.263; Theora; wmv:
    Possible output video formats
    As a first step, we (...)
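    For illustration (an editor's sketch, not part of the article): the lists these two commands print can be filtered to check for a specific codec or container, for example:

# Check whether this ffmpeg build knows the H.264 codec.
ffmpeg -codecs | grep -i h264

# Check whether the FLV container format is available.
ffmpeg -formats | grep -i flv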

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is automatically created when MediaSPIP is initialized, visible only when the visitor is logged in to the site.
    The user can reach the profile editor from their author page; a "Modifier votre profil" link in the navigation is (...)

On other sites (6410)

  • How can I send a virtual camera to Genymotion or the Android Studio Emulator in Ubuntu?

    4 December 2020, by ykasur

    I created a virtual camera using v4l2loopback and ffmpeg. The command I use for ffmpeg is:
ffmpeg -re -loop 1 -i vin.png -vf format=yuv420p -f v4l2 /dev/video2

    vin.png is the image I want to stream to the webcam, and /dev/video2 is the virtual webcam I created with v4l2loopback.
The virtual webcam works and I can see it, e.g. with onlinemicetest.com/webcam-test.
I'm using the Genymotion emulator with the newest Android API (I tried 7.0, 8.1 and 10.0) on Ubuntu 20.04.
Genymotion detects the virtual camera but only displays a dummy image:
[Image omitted: wrong dummy image shown by Genymotion]
I also tried (and would prefer to use) the Android Studio emulator, but I can only select Webcam0 in the device camera configuration, and that points to the real integrated camera, not to my virtual webcam.
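    As an aside (an editor's sketch, not part of the original post): the stock Android emulator can enumerate the host webcams it detects, which helps verify whether the v4l2loopback device is visible to it at all. The AVD name below is a placeholder.

# List the host webcams the Android emulator detects.
emulator -webcam-list

# Launch an AVD with its back camera bound to a specific host webcam;
# "Pixel_3_API_30" is a placeholder AVD name, and webcam1 may or may not
# correspond to /dev/video2 on a given machine.
emulator -avd Pixel_3_API_30 -camera-back webcam1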

    I don't need to use ffmpeg, but I do need to use a tool that lets me control which image to stream from the command line.
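    A minimal sketch of that idea (not from the original post), assuming it is acceptable to restart the feed for each new image; the stream_image helper name is hypothetical, and there is a brief gap on /dev/video2 while ffmpeg restarts:

# Hypothetical helper: restart the ffmpeg feed to /dev/video2 with a new image.
stream_image() {
    pkill -f "ffmpeg.*/dev/video2" 2>/dev/null   # stop any previous feed
    ffmpeg -re -loop 1 -i "$1" -vf format=yuv420p -f v4l2 /dev/video2 &
}

stream_image slate.png   # switch the virtual webcam to slate.png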

    Is there a way to solve this? Many thanks in advance!

    Update 17.11.2020:
Genymotion support answered that they plan to support virtual cameras in the future; this might be ready by mid-2021.

  • ffmpeg filter_complex error (burning subtitles with the overlay filter)

    8 September 2020, by jgkim0518

    I am trying to burn DVB subtitles, which are image-based, onto a video using the ffmpeg overlay filter, but I failed because my filter_complex is wrong.

    This is my command line:

    ./ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -hwaccel_device 0 \
-i input.ts \
-filter_complex "[v:0][s:3]overlay[overlay];[overlay]hwupload_cuda[base];[base]scale_npp=1920:1080[v1];[base]scale_npp=1920:1080[v2];[base]scale_npp=1280:720[v3];[base]scale_npp=720:480[v4];[base]scale_npp=480:360[v5]" \
-map "[v1]" -map 0:a -c:v hevc_nvenc -b:v 6000000 -maxrate 7000000 -bufsize 12000000 -g 15 -c:a libfdk_aac -ar 48000 -ac 2 -pkt_size 128000 -f mpegts test_1.ts \
-map "[v2]" -map 0:a -c:v h264_nvenc -an -b:v 4000000 -maxrate 5000000 -bufsize 8000000 -g 15 -f mpegts test_2.ts \
-map "[v3]" -map 0:a -c:v h264_nvenc -an -b:v 2500000 -maxrate 3500000 -bufsize 5000000 -g 15 -f mpegts test_3.ts \
-map "[v4]" -map 0:a -c:v h264_nvenc -an -b:v 1500000 -maxrate 2500000 -bufsize 3000000 -g 15 -f mpegts test_4.ts \
-map "[v5]" -map 0:a -c:v h264_nvenc -an -b:v 800000 -maxrate 1800000 -bufsize 2000000 -g 15 -f mpegts test_5.ts

    But it failed. These are the error messages:

    Input #0, mpegts, from 'input.ts':
Duration: N/A, start: 22881.964411, bitrate: N/A
  Program 1 
    Metadata:
      service_name    : Service01
      service_provider: FFmpeg
    Stream #0:0[0x100](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, 5.1(side), fltp, 384 kb/s
    Stream #0:1[0x101](ind): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
    Stream #0:2[0x102](zho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
    Stream #0:3[0x103](kho): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
    Stream #0:4[0x104]: Video: h264 (High), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(top first, left), 1920x1080 (1920x1088) [SAR 1:1 DAR 16:9], 25 fps, 50 tbr, 90k tbn, 50 tbc
    Stream #0:5[0x105](CHI): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:6[0x106](CHS): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:7[0x107](IND): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:8[0x108](THA): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:9[0x109](MAN): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:10[0x10a](MON): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:11[0x10b](BUR): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    Stream #0:12[0x10c](ENG): Subtitle: dvb_subtitle ([6][0][0][0] / 0x0006)
    [mpegts @ 0x47cbd00] Invalid stream specifier: base.
    Last message repeated 17 times
    Stream specifier 'base' in filtergraph description [v:0][s:3]overlay[overlay];[overlay]hwupload_cuda[base];[base]scale_npp=1920:1080[v1];[base]scale_npp=1920:1080[v2];[base]scale_npp=1280:720[v3];[base]scale_npp=720:480[v4];[base]scale_npp=480:360[v5] matches no streams.

    My plan is this: [image omitted]

    How can I burn the subtitles onto the video with ffmpeg's filter_complex, given this structure?
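    (Editor's sketch, not part of the original question: the "matches no streams" error arises because a labeled filtergraph output such as [base] can be consumed by only one filter input; after its first use, later references are parsed as input-stream specifiers. Duplicating the frames with the split filter, assuming it accepts the CUDA frames produced by hwupload_cuda, would give the following filtergraph, with the rest of the command unchanged:)

-filter_complex "[v:0][s:3]overlay[overlay];\
[overlay]hwupload_cuda,split=5[b1][b2][b3][b4][b5];\
[b1]scale_npp=1920:1080[v1];\
[b2]scale_npp=1920:1080[v2];\
[b3]scale_npp=1280:720[v3];\
[b4]scale_npp=720:480[v4];\
[b5]scale_npp=480:360[v5]"

    (Note that the software overlay filter needs frames in system memory, so -hwaccel_output_format cuda may also have to be dropped, or overlay_cuda used instead; the sketch above addresses only the label error.)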

  • Is it possible to send a temporary slate (image or video) into a running Azure Live Event RTMP stream?

    15 November 2020, by Brian Frisch

    I'm currently building a video streaming app which leverages Azure Media Services Live Events.

    It consists of:

    1. a mobile app that can stream live video,
    2. a web client that plays the live event video,
    3. a producer screen with controls to start and stop the web client's access to the video, and
    4. a server that handles various operations around the entire system.

    It's working very well, but I would like to add a feature that lets the producer add some elegance to the experience. I'm therefore trying to work out how the producer could switch the incoming source of the stream to a pre-recorded video or even a still image at any point during the recording, and then switch back to live video. A kill switch of some kind: it would cover waiting time if there are technical difficulties on set, and it could also be used for pre-/post-roll branding slates when introing and outroing a video event. I would like this source switch to be embedded in the video stream (so it would also be possible to get it into the final video product if I need it in an archive for later playback).

    I'm trying to do it in a way where the producer can set a timestamp for when the video override should come in and when it should stop. I then want my server to respond to these timestamps and send the instructions over RTMP to the Azure Live Event. Is it possible to send such an instruction ("Hey, play this video bit / show this image in the stream for x seconds") in the RTMP protocol? I've tried to figure it out, and I've read about SCTE-35 markers and such, but I have not been able to find any examples of how to do it, so I'm a bit stuck.

    My plan B is to make it possible to stream an image from the mobile application that already handles the live video stream, but I'm initially targeting an architecture where the mobile app is unaware of anything other than live streaming, and this override switch should preferably be handled by the server, which is a Firebase Functions setup.

    If you are able to see other ways of doing it, I'm all ears.

    I've already tried to build an ffmpeg method that listens for updates to the producer-set state and then streams an image to the same RTMP URL that the video goes to from the mobile app. But it only works when the live video isn't already streaming; it seems I cannot take over an RTMP stream that's already running.
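    (Editor's sketch, not part of the original post: pushing a still image as a slate to an RTMP ingest is typically done along the following lines; the ingest URL and stream key are placeholders, and silent audio is added because many ingest services expect an audio track. It also matches the observed limitation: an RTMP ingest generally accepts only one publisher at a time, so the live feed has to stop before the slate can take over.)

# Stream a still image as a slate, with silent audio, to an RTMP ingest.
# The URL and stream key below are placeholders.
ffmpeg -re -loop 1 -i slate.png \
       -f lavfi -i anullsrc=r=48000:cl=stereo \
       -c:v libx264 -tune stillimage -pix_fmt yuv420p -g 60 \
       -c:a aac \
       -f flv "rtmp://example-ingest.channel.media.azure.net:1935/live/stream-key"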