Advanced search

Media (1)

Keyword: - Tags -/ogv

Other articles (99)

  • MediaSPIP 0.1 Beta version

    25 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
    The zip file provided here only contains the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • Multilang: improving the interface for multilingual blocks

    18 February 2011, by

    Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
    Once it is activated, a preconfiguration is set up automatically by MediaSPIP init so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.

  • Enhancing it visually

    10 April 2011

    MediaSPIP is based on a system of themes and templates ("squelettes"). The templates define where information is placed on the page, defining a specific use of the platform, while the themes define the overall graphic design.
    Anyone can propose a new graphic theme or template and make it available to the community.

On other sites (13375)

  • output of ffmpeg comes out like yamborghini high music video

    19 January, by chip

    I do this procedure when I edit a long video:

    • segment it into 3-second videos, so I come up with a lot of short videos
    • I randomly pick videos and put them in a list
    • then I join these short videos together using concat
    • now I get a long video again; the next thing I do is segment the video into 4-minute videos

    After processing, the videos look messed up. I don't know how to describe it, but it looks like the music video "Yamborghini High".

    For some reason, this only happens to videos I capture at night. I do the same process for daytime footage with no problem.

    Is there a problem with slicing, merging and then slicing again?

    Or is it an issue that I run multiple ffmpeg scripts at the same time?

    Here's the script:

    # slice every mp4 into 10-second parts (stream copy), delete the original, then list the odd-numbered parts for the concat demuxer
    for FILE in *.mp4; do ffmpeg -i "$FILE" -vcodec copy -f segment -segment_time 00:10 -reset_timestamps 1 "part_$( date '+%F%H%M%S' )_%02d.mp4"; rm -f "$FILE"; done; echo 'slicing completed.' && \
    for f in part_*[13579].mp4; do echo "file '$f'" >> mylist.txt; done
    # join the listed parts with the concat demuxer, then re-slice the result into 4-minute videos
    ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4 && echo 'done merging.' && \
    ffmpeg -i output.mp4 -threads 7 -vcodec copy -f segment -segment_time 04:00 -reset_timestamps 1 "Video_Title_$( date '+%F%H%M%S' ).mp4" && echo 'individual videos created'

  • Merging input Streams with nodejs/ffmpeg

    14 September 2020, by jAndy

    I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

    From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe the raw data to it, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then actually gets displayed in a web browser for all participants in the video chat.
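
    A minimal sketch of that spawn-and-pipe idea, assuming the browser sends MediaRecorder's default WebM blobs and an ffmpeg binary is on the PATH; pipeToHls and the paths are placeholders, not an existing API, and the HLS files are just written locally here (uploading them to a cloud service is a separate step):

    import { spawn } from "node:child_process";
    import type { Readable } from "node:stream";

    // One ffmpeg process per incoming camera stream: read WebM from stdin, write HLS segments.
    export function pipeToHls(incoming: Readable, playlistPath: string) {
      const ffmpeg = spawn("ffmpeg", [
        "-f", "webm",      // container of the piped data (assumption: MediaRecorder's WebM default)
        "-i", "pipe:0",    // read the input from stdin
        "-c:v", "libx264", // re-encode to H.264/AAC for broad HLS player support
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "4",  // 4-second segments
        playlistPath,      // e.g. "./hls/user1/stream.m3u8" (placeholder)
      ]);
      incoming.pipe(ffmpeg.stdin);                              // forward the blobs as they arrive
      ffmpeg.stderr.on("data", (d) => process.stderr.write(d)); // ffmpeg logs to stderr
      return ffmpeg;
    }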

    So far, I think all of this should be fairly easy to implement, but I keep going around in circles on the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is only one combined data stream to receive.

    If there are 3 people in that video chat, my server receives 3 data streams from those clients and combines these data streams (from the individual webcam data sources) into one output stream.

    How could that be accomplished?
    Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads up here; maybe I'm thinking in a completely wrong direction.
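
    For what it's worth, ffmpeg can compose several inputs into a single frame with a filter graph (hstack, xstack or overlay for video, amix for audio). A rough sketch of that shape, with placeholder input paths; in practice these could be named pipes or intermediate files fed by the per-client processes:

    import { spawn } from "node:child_process";

    // Compose three participant streams into a single frame and one HLS output.
    // hstack expects inputs of equal height; xstack or a scale filter can handle mismatches.
    const args = [
      "-i", "cam1.webm", "-i", "cam2.webm", "-i", "cam3.webm",   // placeholder inputs
      "-filter_complex",
      "[0:v][1:v][2:v]hstack=inputs=3[v];[0:a][1:a][2:a]amix=inputs=3[a]",
      "-map", "[v]", "-map", "[a]",
      "-c:v", "libx264", "-c:a", "aac",
      "-f", "hls", "-hls_time", "4",
      "combined.m3u8",                                           // placeholder output playlist
    ];
    spawn("ffmpeg", args, { stdio: "inherit" });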

    Another question that arises is whether I can really just "dump" any data I receive as a binary blob created from getUserMedia or MultiStreamRecorder into ffmpeg, or whether I have to specify somewhere, and somehow, the exact codecs being used, etc.
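
    On that point, ffmpeg can usually probe the container itself, but it helps to know what the browser actually produced (MediaRecorder typically emits WebM with VP8/VP9 video and Opus audio). A small, hedged sketch that inspects a captured sample blob with ffprobe; the file name is a placeholder:

    import { spawn } from "node:child_process";
    import { createReadStream } from "node:fs";

    // Pipe one saved sample of the incoming data into ffprobe to see its container and codecs.
    const ffprobe = spawn("ffprobe", ["-hide_banner", "-show_format", "-show_streams", "pipe:0"]);
    createReadStream("sample-blob.webm").pipe(ffprobe.stdin);   // placeholder sample file
    ffprobe.stdout.pipe(process.stdout);
    ffprobe.stderr.pipe(process.stderr);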

  • avcodec/dvdsubdec, dvbsubdec: remove bitmap dumping in DEBUG builds

    28 May 2022, by softworkz
    avcodec/dvdsubdec, dvbsubdec: remove bitmap dumping in DEBUG builds

    It's been a regular annoyance and often undesired.
    There will be a subtitle filter which allows to dump individual
    subtitle bitmaps.

    Signed-off-by: softworkz <softworkz@hotmail.com>
    Signed-off-by: Marton Balint <cus@passwd.hu>

    • [DH] libavcodec/dvbsubdec.c
    • [DH] libavcodec/dvdsubdec.c