
Other articles (82)

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, through the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customize the form used to create a news item.
    News item creation form: for a document of type "news item", the fields offered by default are: Publication date (customize the publication date) (...)

  • Customizing categories

    21 June 2013

    Category creation form
    For those familiar with SPIP, a category can be thought of as a section (rubrique).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Form mask configuration.
    For a document of type "media", the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Authorizations overridden by plugins

    27 April 2010

    MediaSPIP core
    autoriser_auteur_modifier(), so that visitors can edit their own information on the authors page

On other sites (9007)

  • ffmpeg audio and video gradually go out of sync when recorded with shell script and make

    6 August 2020, by cgmil

    I wrote the following shell script for recording streamcasts with ffmpeg:

    


# Exit (with a message on stderr) if no recording device was given
[ -z "$1" ] && { echo "No recording device selected!" 1>&2; exit 1; } || DEV=$1
VIDFILE=video/bvid.mkv
AUDFILE=audio/baud.wav
DIM=$(xdpyinfo | grep dimensions: | awk '{print $2;}')
(sleep 10 && echo "Streaming...") &
# Video and audio are captured by two separate ffmpeg processes
ffmpeg -nostdin -loglevel panic -f x11grab -s "$DIM" -i "$DISPLAY.0" -c:v libx264 "$VIDFILE" &
ffmpeg -nostdin -loglevel panic -f alsa -i hw:"$DEV" -c:a libmp3lame "$AUDFILE" &
sleep 1
[ -f "$VIDFILE" ] && echo "Created video" 1>&2 || echo "No video!" 1>&2
sleep 1
[ -f "$AUDFILE" ] && echo "Created audio" 1>&2 || echo "No audio!" 1>&2

# Install the trap before waiting so Ctrl-C (SIGINT) stops the background ffmpeg jobs
trap 'kill -s 2 $(jobs -p)' SIGINT
wait


    


    This produces a video file and an audio file that are then combined into the stream via make, using the command make stream. Here is the Makefile (note that there is a 3-minute-long audio file and image files that are used to build an opening splash screen):

    


    FULLAUDIO=audio/full.wav
AUDSTART=84
TITLE="Stats Aside:\\nInterpreting\\nProbability"
SIZEW=1920
SIZEH=1080

audio/clip.wav : $(FULLAUDIO)
        ffmpeg -ss $(AUDSTART) -i $< -t 6 -af "afade=t=in:st=0:d=3:curve=qsin,afade=t=out:st=3:d=3:curve=qsin" -c:a libmp3lame $@

img/thumb.png : img/bg.png img/ico.png
        convert -stroke white -fill white -background black -transparent "#000000" \
                 -font "URWBookman-Light" -gravity Center -size 928x1080 \
                label:$(TITLE) png:- | composite -gravity west -geometry +64+0 - \
                img/bg.png $@
        convert img/ico.png -transparent "#ffffff" png:- | composite \
                 -gravity southeast -geometry +32+32 - $@ $@
        convert $@ -resize $(SIZEW)x$(SIZEH) $@

video/intro.mp4 : img/thumb.png audio/clip.wav
        ffmpeg -i audio/clip.wav -loop 1 -t 6 -i img/thumb.png -vf "fade=t=in:st=0:d=1,fade=t=out:st=5:d=1:c=white,scale=$(SIZEW):$(SIZEH)" -c:v libx264 -c:a libmp3lame $@

ProbabilityInterpretation.mp4 : video/bvid.mkv video/intro.mp4 audio/baud.wav
        ffmpeg -i video/intro.mp4 -ss 11 -i video/bvid.mkv \
                 -ss 10 -i audio/baud.wav \
                 -filter_complex \
                "[1]fade=t=in:st=0:d=1:c=white,scale=$(SIZEW):$(SIZEH)[bvid];\
                 [2]afftdn=nr=20[baud];\
                 [0:v][0:a][bvid][baud]concat=n=2:v=1:a=1[v][a]" -map "[v]" -map "[a]" \
                 -c:v libx264 -c:a libmp3lame $@

stream : ProbabilityInterpretation.mp4

clean:
        -rm ProbabilityInterpretation.mp4
        -rm video/intro.mp4
        -rm audio/clip.wav
        -rm img/thumb.png

veryclean:
        make clean
        -rm video/bvid.mkv
        -rm audio/baud.wav



    


    Because of the lag between my webcam and what appears on screen, I intentionally start the video and audio at different times so that they eventually come into sync. However, the end result is a video that starts with the video and audio in sync but gradually drifts out of sync, with the video lagging well behind the audio. Here is a link to the video with the end result; it's long, but notice that it starts in sync and ends out of sync. I should probably also note that the operating system is running in a virtual machine (VirtualBox).

    


    Why is this happening, and what can I do to prevent or correct it?
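    Since the video and audio here are captured by two independent ffmpeg processes, their timestamps come from separate clocks and can drift apart. One commonly suggested direction, sketched below purely as an illustration, is to grab the screen and the ALSA device in a single ffmpeg invocation so both streams share one clock; $DIM, $DEV and $DISPLAY are the variables from the script above, while the output name, -framerate 30 and the aac/veryfast settings are assumptions, not values from the question:

# Sketch only: capture X11 video and ALSA audio in one ffmpeg process so
# both streams are timestamped by the same process.
# $DIM, $DEV and $DISPLAY come from the script above; the output file name,
# -framerate 30 and the encoder settings are illustrative assumptions.
ffmpeg -nostdin \
       -f x11grab -video_size "$DIM" -framerate 30 -i "$DISPLAY.0" \
       -f alsa -i hw:"$DEV" \
       -c:v libx264 -preset veryfast \
       -c:a aac \
       video/combined.mkv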

    


  • Problem updating drawtext with file using ffmpeg [duplicate]

    17 November 2020, by drmobbins

    I have a shell script that combines a 24/7 AAC audio stream with a 720p HD picture/background and streams the output live to YouTube. This is for an internet radio station. The script works perfectly except for the drawtext option. The drawtext option references a file that is updated every 15 seconds (using Python and cron) with the current song metadata (from the radio automation suite) and is supposed to print the contents of that file on screen. This happens once, when the ffmpeg command in the script starts, but the text doesn't update after a song change.

    


    I assumed that, since the metadata file changes every 15 seconds on the server, the song details shown in the output video on YouTube would update as well... but they don't.

    


    #!/bin/bash

#Quality settings
VBR="1500k"
FPS="30"
QUAL="ultrafast"
AUDIO_ENCODER="aac"

#Youtube settings
YOUTUBE_URL="rtmp://a.rtmp.youtube.com/live2"
YOUTUBE_KEY="xxxx-xxxx-xxxx-xxxx-xxxx"

#Sources
VIDEO_SOURCE="bg720p.jpg"
AUDIO_SOURCE="http://stream.url"

#Metadata settings
TRACK_METADATA=$(cat metadata.txt)

ffmpeg \
 -loop 1 \
 -re \
 -framerate $FPS \
 -i "$VIDEO_SOURCE" \
 -thread_queue_size 512 \
 -i "$AUDIO_SOURCE" \
 -vf "drawtext=fontfile=OpenSans-Light.ttf:text=$TRACK_METADATA:x=10:y=680:fontsize=20:fontcolor=white" \
 -c:v libx264 -tune stillimage -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS *2)) -b:v $VBR \
 -c:a $AUDIO_ENCODER -threads 6 -ar 44100 -b:a 64k -bufsize 512k -pix_fmt yuv420p \
 -f flv $YOUTUBE_URL/$YOUTUBE_KEY


    


    What am I missing that will make the output continually check for file changes and use drawtext to display the new contents? Thanks in advance!
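    One variant often suggested for this kind of setup is to let drawtext read the text from the file itself and reload it on every frame, rather than expanding a shell variable once when ffmpeg starts. Below is a sketch reusing the variables from the script above; only the -vf argument changes, using drawtext's textfile and reload options:

# Sketch: same command as above, but drawtext reads metadata.txt itself.
# reload=1 asks drawtext to re-read the file before each frame, so text
# changed while streaming shows up in the output. The file should be
# updated atomically (write a temp file, then rename it), otherwise a
# frame may pick up a partial write.
ffmpeg \
 -loop 1 -re -framerate $FPS -i "$VIDEO_SOURCE" \
 -thread_queue_size 512 -i "$AUDIO_SOURCE" \
 -vf "drawtext=fontfile=OpenSans-Light.ttf:textfile=metadata.txt:reload=1:x=10:y=680:fontsize=20:fontcolor=white" \
 -c:v libx264 -tune stillimage -pix_fmt yuv420p -preset $QUAL -r $FPS -g $(($FPS * 2)) -b:v $VBR \
 -c:a $AUDIO_ENCODER -threads 6 -ar 44100 -b:a 64k -bufsize 512k \
 -f flv $YOUTUBE_URL/$YOUTUBE_KEY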

    


  • How to work with data from streaming services in my Java application?

    24 November 2020, by gabriel garcia

    I'm currently trying to develop a "streaming client" as a way to organize multiple streaming services (twitch, yt, mitele...) in a single desktop application written in Java.

    


    It basically relies on streamlink (which in turn relies on ffmpeg) for all its features, so my project could be described as a frontend for streamlink.

    


    Straight to the point: one of the features I'd like to add is the option to programmatically record streams in the background and show the video to the user when requested. Since the user may also want to watch the stream without recording it, I have to work with the raw byte data sent from those streaming sources.
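    Side note on the record-and-watch case: since everything arrives as one byte stream, a simple pattern is to duplicate it, sending one copy to disk and another to whatever consumes it live. The sketch below uses a placeholder channel URL, and ffplay stands in for the consumer; in the application, the consumer would be the Java process reading the bytes:

# Sketch: record the stream while also piping it to a live consumer.
# The URL is a placeholder; --stdout makes streamlink write the stream to
# standard output instead of starting a player, tee copies it to disk,
# and ffplay here stands in for whatever actually displays it.
streamlink --stdout "https://www.twitch.tv/somechannel" best \
  | tee recording.ts \
  | ffplay -loglevel warning -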

    


    So, the problem is basically that I do not know much about video coding/decoding/muxing/demuxing nor video theory like container structure, video formats and such.

    


    But the idea is to work with all the data sent from the stream source (say, twitch), read these bytes (I'm not sure what kind of information is sent to the client, nor in what format) from the java.lang.Process's stdout, and then present it to the user.
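    To see concretely what those bytes are, a short dump of the stream can be handed to ffprobe. This is only a sketch and assumes a capture such as the recording.ts produced above; for Twitch-style HLS sources the container is typically MPEG-TS:

# Sketch: let ffprobe describe a short dump of the stream (recording.ts is
# assumed to exist, e.g. from the tee example above).
ffprobe -hide_banner recording.ts   # prints the container format and the video/audio codecs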

    


    Here's another problem: I don't know how to play video streams in JavaFX, and I don't think it's even supported right now. So I would have to extract each frame and its associated sound from stdout and show them to the user each time a new frame is received (oops, another problem, since I don't know where each frame starts/ends when I'm reading stdout line by line).
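    On the frame-boundary problem specifically, one common technique (sketched below, independent of JavaFX) is to have ffmpeg decode the incoming container into raw video frames on stdout. With a fixed pixel format and resolution every frame has a constant, known byte size, so the reading side can consume fixed-size chunks instead of lines. The resolution, pixel format and the final reader command are illustrative assumptions:

# Sketch: decode whatever container arrives on stdin into raw RGB frames on
# stdout. At pix_fmt rgb24 and 1280x720, every frame is exactly
# 1280 * 720 * 3 = 2764800 bytes, so a reader can take fixed-size chunks.
# The URL, the 1280x720 size and "your_frame_reader" are placeholders.
streamlink --stdout "https://www.twitch.tv/somechannel" best \
  | ffmpeg -loglevel warning -i pipe:0 -f rawvideo -pix_fmt rgb24 -s 1280x720 -an pipe:1 \
  | your_frame_reader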

    


    As a summary:

    • How can I know where each frame starts/stops?
    • How can I extract the image and sound from each frame?


    I hope I'm not asking too much and that you could shed some light upon my darkness.