Advanced search

Media (5)

Keyword: - Tags - / open film making

Other articles (107)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, as long as your MediaSPIP installation is at version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise a description of the problem as possible; the steps that led to the problem; and a link to the site/page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as the equivalent of a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form templates.
    For a document of type media, the fields not displayed by default are: Short description
    It is also in this configuration section that you can indicate the (...)

On other sites (11446)

  • iOS local host RTMP server

    16 September 2020, by estoril

    I have been tasked with creating an iPad application (iOS 12.3 being the latest version we support) that runs a local RTMP server. The purpose: external devices connect to this local RTMP server on the iPad, either via the iPad's Wi-Fi connection or an enabled hotspot, and the iPad plays back (mirrors) the display of those external devices.

    I have been successful in creating a local HTTP server, but as soon as external devices connect, the local server on the iPad quits.

    I will be using open source software such as VLC or FFmpeg for decoding and playback of the video feed, but I need guidance on an iOS local-host implementation, along the lines of https://github.com/sallar/mac-local-rtmp-server for the Mac.

    Any help would be greatly appreciated!
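For reference, ffmpeg itself can act as a one-shot RTMP listener via the rtmp protocol's -listen option, which may be a useful starting point even if the final iPad implementation has to be native. A minimal sketch, assuming ffmpeg is available; the port, application path and output file name are placeholders, not values from the question:

```python
# Sketch only: ffmpeg's rtmp protocol can wait for a single incoming
# publisher when "-listen 1" is set on the input URL. Port, app path and
# output file below are placeholder assumptions.
import shlex

def rtmp_listen_cmd(port=1935, app="live/stream", out="received.flv"):
    """Build an ffmpeg argv that waits for one RTMP publisher and records it."""
    url = f"rtmp://0.0.0.0:{port}/{app}"
    return ["ffmpeg", "-listen", "1", "-i", url, "-c", "copy", out]

print(shlex.join(rtmp_listen_cmd()))
# → ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/stream -c copy received.flv
```

A publisher would then connect to rtmp://&lt;ipad-ip&gt;:1935/live/stream; whether this maps cleanly onto a sandboxed iOS process is exactly the open question above.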

  • ffmpeg live webcam colorkey subtraction and display feed on desktop (with the subtracted color transparent and the desktop visible through)

    26 May 2018, by Vij

    Purpose: I want to create instructional video lectures using a laptop webcam and presentation slides. I should be visible in the bottom-right corner of the desktop, in a small window or full screen, explaining the slides (like a TV weather report).

    What I seek: Is there any way to apply colorkey to the live webcam video to subtract the background (green screen), so that the desktop is visible through the borderless webcam video window on top? (Then record everything on the desktop.)

    What I have done: I have successfully overlaid the colorkeyed live webcam video onto the x11grab :0.0 capture and saved the output to a video file.

       ffmpeg -f x11grab -thread_queue_size 64 -video_size 1024x600 -framerate 30 -i :0.0 -f v4l2 -thread_queue_size 64 -video_size 320x180 -framerate 30 -i /dev/video0 -filter_complex '[1:v]colorkey=0x000000:0.1:0[ckout];[0:v][ckout] overlay=main_w-overlay_w:main_h-overlay_h:format=yuv444' -vcodec libx264 -preset ultrafast -qp 0 -pix_fmt yuv444p video.mp4

    But this is not what I want, because this way I cannot see what is actually happening on the desktop or where I should point on the slide (lack of instructional control).

    I also successfully piped this composite output through ffplay, but that creates a mirror-in-mirror effect and is thus useless.

    What I expect: I just want to apply the ffmpeg colorkey to the webcam feed /dev/video0 and display the color-subtracted output on the desktop, so that the subtracted region in the video player (ffplay/mplayer) appears transparent and the desktop shows through (the player should preserve the alpha channel and be transparent in the colorkeyed region): the weatherman effect.

    Roughly, I am looking for something like:
    ffmpeg -i /dev/video0 colorkey[ckout] - | ffplay -i - (or | mplayer -)
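A minimal sketch of that pipe, assuming a green key colour and the nut container to carry the alpha plane between the two processes. Note that ffplay composites alpha against black rather than letting the desktop show through, so true window transparency would still need compositor support that ffplay does not provide:

```python
# Rough sketch of the "ffmpeg | ffplay" pipe described above. The key colour
# (green) and the similarity/blend thresholds are assumptions; ffplay itself
# cannot draw a transparent window, it only receives the keyed frames.
producer = [
    "ffmpeg", "-f", "v4l2", "-i", "/dev/video0",
    # colorkey=<colour>:<similarity>:<blend>, then keep the alpha plane
    "-vf", "colorkey=0x00ff00:0.3:0.1,format=yuva420p",
    "-c:v", "rawvideo", "-f", "nut", "pipe:1",
]
consumer = ["ffplay", "-i", "pipe:0"]
pipeline = " ".join(producer) + " | " + " ".join(consumer)
print(pipeline)
```

Run under a shell, the printed string is the whole pipeline; in a script, the two halves could instead be wired together with subprocess and an explicit pipe.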

    Note: I know openbroadcaster (OBS) can do this job. I tried to install it, but it does not run, citing "Failed to initialize video. Your GPU may not be supported, or your graphics drivers may need to be updated." I have an old laptop with 2 GB RAM and an Atom processor running Xubuntu 16.04; openbroadcaster probably cannot run on it.

    As I have successfully overlaid the colorkeyed webcam feed with x11grab (at a maximum of 50% CPU usage), I think live webcam colorkey subtraction is easily possible with the available resources.

    Please give suggestions.

  • mpv / ffplay struggling with --lavfi-complex and -vf

    26 November 2017, by Marcuzzz

    This is what I’m trying to accomplish in mpv.
    https://user-images.githubusercontent.com/7437046/33134083-130c2cc0-cf9f-11e7-8f8d-237297dc9c93.png

    It currently works with ffplay.
    The code for it looks like this:

    LAVFI = ("movie='$MOVIE':streams=dv+da [video][audio]; " +
       "[video]scale=512:-1, split=3[video1][video2][video3];" +
       "[audio]asplit=[audio1][audio2]; " +
       "[video1]format=nv12,waveform=graticule=green:mode=column:display=overlay:" +
       "mirror=1:components=7:envelope=instant:intensity=0.2, " +
       "scale=w=512:h=512:flags=neighbor, " +
       "pad=w=812:h=812:color=gray [scopeout]; " +
       "[video2]scale=512:-1:flags=neighbor[monitorout]; " +
       "[audio1]ebur128=video=1:meter=18:framelog=verbose:peak=true[ebur128out][out1]; " +
       "[ebur128out]scale=300:300:flags=fast_bilinear[ebur128scaledout]; " +
       "[scopeout][ebur128scaledout]overlay=x=512:eval=init[videoandebu]; " +
       "[audio2]avectorscope=s=301x301:r=10:zoom=5, " +
       "drawgrid=x=149:y=149:t=2:color=green [vector]; " +
       "[videoandebu][monitorout]overlay=y=512:eval=init[comp3]; " +
       "[comp3][vector]overlay=x=512:y=300:eval=init, " +
       "setdar=1/1, setsar=1/1, " +
       "drawtext=fontfile='$FONT':timecode='$TIMECODE':" +
       "r=$FPS:x=726:y=0:fontcolor=white[comp]; " +
       "[video3]format=nv12,vectorscope=mode=color3, " +
       "scale=1:1[vectorout]; "+
       "[comp][vectorout]overlay=x=512:y=600:eval=init[out0]")  

    I found it on GitHub:
    https://github.com/Warblefly/FFmpeg-Scope

    After experimenting a lot, I came to understand it a bit...
    This code works with Python and mpv.

    LAVFI = "[aid1]asplit=3[audio1][audio2][audio3];" + \
       "[audio1]avectorscope=s=640x640[audioscope];" +  \
       "[audio2]ebur128=video=1:meter=18[ebu][ao];" + \
       "[audio3]showvolume[showv];" + \
       "[vid1]scale=640:-1, split=4[video1][video2][video3][video4];" + \
       "[video1]format=nv12[comp];" + \
       "[video2]hflip[comp2];" + \
       "[video3]format=nv12,[comp3]; " + \
       "[comp][audioscope]overlay=y=-160[a];" + \
       "[comp2][ebu]overlay[b];" + \
       "[comp3][showv]overlay[c];" + \
       "[video4][a]hstack=inputs=2[top]; " + \
       "[b][c]hstack=inputs=2[bottom]; " + \
       "[top][bottom]vstack=inputs=2[vo]"  

    dos_command = [MPV + 'mpv','--lavfi-complex',LAVFI,filename_raw]
    subprocess.check_output(dos_command)

    But if I change [video3]format=nv12,[comp3] to [video3]waveform[comp3] or
    [video3]vectorscope[comp3] it doesn’t work. If I change it to
    [video3]negate[comp3] it works...

    So, to explain the above code a bit:
    First we split the incoming audio 3 times.
    Then we pass it to 3 measuring outputs: avectorscope, ebur128, showvolume.
    I don't understand the "[ao]" bit in ebur128=.....[ebu][ao]...
    Then we focus on the video, which we split 4 times.
    I use "format", I don't know exactly why...
    Then we start overlaying and stacking the "labels".
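On the "[ao]" bit: with video=1 the ebur128 filter has two output pads, the rendered meter video and the audio passed through unchanged, and mpv's --lavfi-complex reserves the pad names [vo] and [ao] for the streams it should actually play (the ffplay variant uses [out0]/[out1] the same way). Since the graph is only a string, a rough label count catches most wiring typos before mpv has to parse it; this is a sketch, not mpv's actual validation:

```python
# Rough sanity check for a lavfi graph string: every [label] should appear
# twice (produced then consumed), except graph inputs (vid1/aid1) and the
# final [vo]/[ao] outputs, which appear once. Anything else seen once is
# likely a typo such as the stray "format=nv12,[comp3]" above.
import re

def dangling_labels(graph):
    """Return the labels that appear exactly once in a lavfi graph string."""
    counts = {}
    for name in re.findall(r"\[([A-Za-z0-9_]+)\]", graph):
        counts[name] = counts.get(name, 0) + 1
    return sorted(n for n, c in counts.items() if c == 1)

graph = "[vid1]scale=640:-1,split=2[a][b];[a]hflip[c];[b][c]hstack[vo]"
print(dangling_labels(graph))  # → ['vid1', 'vo']: just the input and the output
```

Running it over the full graph above would list [aid1], [vid1], [vo] and [ao] and nothing else if every intermediate label is wired up.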