
Media (1)


Other articles (82)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, you must manually install all of the software dependencies on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)

  • Customising your site by adding a logo, a banner or a background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Contributing to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    MediaSPIP is currently only available in French and (...)

On other sites (13823)

  • gstreamer records

    13 October 2012, by Eric

    I use the following command to record audio and video from my webcam:

    gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! \
                tee name=t_vid ! queue ! videoflip method=horizontal-flip ! \
                xvimagesink sync=false t_vid. ! queue ! \
                videorate ! video/x-raw-yuv,framerate=30/1 ! queue ! mux. \
                autoaudiosrc ! audiorate ! audio/x-raw-int,rate=48000,channels=1,depth=16 ! queue ! \
                audioconvert ! queue ! mux. avimux name=mux ! \
                filesink location=video.avi

    And the result is correct in terms of synchronisation between the streams. However, the AVI file is very big, since it contains uncompressed data...
    Could you advise me on how to reduce the size of the recordings? Note that after recording I split the audio and video into separate files for processing. It is crucial to keep them synchronised.
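
    One way to shrink the recording, sketched here on the assumption that the x264enc, lamemp3enc and matroskamux elements are installed (none of them appear in the original command), is to compress both streams before muxing instead of writing raw data; the display branch is left out for brevity:

    # Hypothetical encoded variant of the pipeline above (H.264 video + MP3 audio in Matroska)
    gst-launch-0.10 v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! \
                videoflip method=horizontal-flip ! videorate ! \
                video/x-raw-yuv,framerate=30/1 ! x264enc ! queue ! mux. \
                autoaudiosrc ! audiorate ! audioconvert ! lamemp3enc ! queue ! mux. \
                matroskamux name=mux ! filesink location=video.mkv

    Since both streams keep their timestamps in the container, splitting them into separate files afterwards (for example with ffmpeg) should not lose synchronisation.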

    * Edit *

    I tried to use ffmpeg to compress the avi files using this command:

    ffmpeg -i video.avi -vcodec msmpeg4v2 output.avi

    But it seems that the bitrate is invalid (N/A since it's raw data?).
    Here is the output:

    Input #0, avi, from 'video.avi':
    Duration: 00:00:00.00, start: 0.000000, bitrate: -2147483 kb/s
     Stream #0.0: Video: rawvideo, yuv420p, 640x480, 30 tbr, 30 tbn, 30 tbc
     Stream #0.1: Audio: pcm_s16le, 48000 Hz, 1 channels, s16, 768 kb/s
    [buffer @ 0xef57e0] w:640 h:480 pixfmt:yuv420p
    Incompatible sample format 's16' for codec 'ac3', auto-selecting format 'flt'
    [ac3 @ 0xedece0] channel_layout not specified
    [ac3 @ 0xedece0] No channel layout specified. The encoder will guess the layout, but it might be incorrect.
    [ac3 @ 0xedece0] invalid bit rate
    Output #0, avi, to 'output.avi':
     Stream #0.0: Video: msmpeg4v2, yuv420p, 640x480, q=2-31, 200 kb/s, 90k tbn, 30 tbc
     Stream #0.1: Audio: ac3, 48000 Hz, mono, flt, 200 kb/s
    Stream mapping:
     Stream #0.0 -> #0.0
     Stream #0.1 -> #0.1
    Error while opening encoder for output stream #0.1 - maybe incorrect parameters such as bit_rate, rate, width or height

    Thanks for helping.
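
    For the ffmpeg error above, the problem is on the audio side: with no audio codec specified, this build falls back to ac3 and then rejects the default 200 kb/s, which is not one of the bitrates AC-3 allows. A minimal work-around, assuming the build includes libmp3lame (the output name is illustrative), is to pick the audio codec and bitrate explicitly:

    ffmpeg -i video.avi -vcodec msmpeg4v2 -acodec libmp3lame -ab 128k output.avi

    Alternatively, keeping the audio untouched with -acodec copy compresses only the video and leaves the PCM track exactly as recorded, which is the safer choice if the file is split afterwards.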

  • ffmpeg does not draw text

    4 September 2016, by Michael Heuberger

    I hope one of you can tell me why this ffmpeg command of mine does not draw the desired text. The produced video doesn't have it. Here you go:

    ffmpeg -f image2 -thread_queue_size 64 -framerate 15.1 -i /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/frames/%d.webp -y -an -vcodec libvpx -filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2 -vf scale=trunc(iw/2)*2:trunc(ih/2)*2 -crf 12 -deadline realtime -cpu-used 4 -pix_fmt yuv420p -loglevel warning -movflags +faststart /home/michael-heuberger/binarykitchen/code/videomail.io/var/local/tmp/clients/videomail.io/11e6-723f-d0aa0bd0-aa9b-f7da27da678f/videomail_preview.webm

    The crucial part is this video filter:

    -filter:v drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2

    Does it seem correct to you? If so, why am I not seeing any text in the videomail_preview.webm video file?

    I am using ffmpeg v2.8.6 here with --enable-libfreetype, --enable-libfontconfig and --enable-libfribidi enabled.

    Furthermore, the above command was produced with fluent-ffmpeg.

    So, any ideas?
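
    One likely explanation, offered here as a guess rather than a confirmed diagnosis: the command specifies both -filter:v drawtext=... and -vf scale=..., and ffmpeg keeps only one filtergraph per output stream (the last one given), so the scale filter silently replaces the drawtext filter. Combining the two into a single chain should bring the text back:

    -vf "drawtext=fontfile=/home/michael-heuberger/binarykitchen/code/videomail.io/src/assets/fonts/Varela-Regular.ttf:text=www.videomail.io:fontsize=180:fontcolor=white:x=150:y=150:shadowcolor=black:shadowx=2:shadowy=2,scale=trunc(iw/2)*2:trunc(ih/2)*2"

    With fluent-ffmpeg this means passing both filters in the same videoFilters() call rather than as two separate options (assuming that is how the command was assembled).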

  • FFmpeg, videotoolbox and avplayer in iOS

    9 January 2017, by Hwangho Kim

    I have a question about how these things are connected and what exactly they do.

    FYI, I have some experience with video players and with encoding and decoding.

    In my job I handle UDP streaming from a server: I receive it with ffmpeg, decode it and draw it with OpenGL. I also use ffmpeg for a video player.

    These are the questions...

    1. Can only ffmpeg decode the UDP stream (encoded with ffmpeg on the server), or not?

    I found some useful information about VideoToolbox, which can decode a stream with hardware acceleration on iOS. So could I also decode the stream from the server with VideoToolbox?

    2. If it is possible to decode with VideoToolbox (I mean, if VideoToolbox could replace ffmpeg), then what is the videotoolbox source code in ffmpeg? Why is it there?

    In my decoder I create an AVCodecContext from the stream; it has hwaccel and hwaccel_context fields, both of which are set to null. I thought VideoToolbox was a kind of API that could help ffmpeg use the hardware acceleration of iOS, but that does not seem to be the case for now...

    3. If VideoToolbox can decode a stream, can it also decode local H264 files, or is only streaming possible?

    AVPlayer is a good tool to play a video, but if VideoToolbox could replace AVPlayer, what would the benefit be? Or is that impossible?

    4. Does FFmpeg only use the CPU for decoding (a software decoder), or hardware acceleration as well?

    When I play a video with an ffmpeg-based player, CPU usage goes over 100%. Does that mean ffmpeg uses only a software decoder, or is there a way to use hardware acceleration?

    Please excuse my poor English; any answer would be appreciated.

    Thanks.
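
    A hedged note on questions 2 and 4: the videotoolbox code inside FFmpeg is precisely the glue that lets FFmpeg's own decoders hand frames to Apple's hardware decoder (via hwaccel / hwaccel_context, or hw_device_ctx in newer releases), so FFmpeg is not limited to software decoding. Assuming a build with VideoToolbox enabled, the command-line equivalent looks roughly like this; the file names are illustrative:

    # Decode with the VideoToolbox hwaccel and re-encode with the VideoToolbox H.264 encoder
    ffmpeg -hwaccel videotoolbox -i input.mp4 -c:v h264_videotoolbox -b:v 2M output.mp4

    If CPU usage stays above 100% in your own player, that suggests the decoder is still running in software: in the libav* API the hwaccel has to be set up explicitly (for example through hw_device_ctx); it is not enabled by default.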