Other articles (50)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If necessary, contact the administrator of your MediaSPIP to find out.

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • From upload to the final video [standalone version]

    31 January 2010

    The path of an audio or video document through SPIPMotion is divided into three distinct stages.
    Upload and retrieval of information about the source video
    First, a SPIP article must be created and the "source" video document attached to it.
    When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)

On other sites (9971)

  • FFmpeg screencast recording: which codecs to use?

    24 April 2013, by mkaito

    I've been experimenting with recording screencasts using FFmpeg's X11grab module, which has worked more or less fine so far. I understand that a/v encoding is a complex process with many fine details, but I'm doing my best to learn.

    I'd like to do "lightweight" recording of a video stream that puts as little strain as possible on the system while the stream is being recorded. I record two audio streams separately with pacat and sox. Later, the whole thing is filtered, normalized, encoded, and combined into a Matroska container.

    Right now, I'm having ffmpeg record a rawvideo stream to be fed to x264 as y4m input. I experimented with ffv1 and straight x264 recording before. My system can't handle real-time encoding with x264 at the settings I want for the final stream, so I have to recompress separately once the recording is done. I've found that ffv1 gives me terrible frame dropping, and yuv4 does too, but less so. I suspect this is due to hard drive speed, even though I'm writing to a SATA3 Caviar Black that's used exclusively to hold the recorded data.

    The question is, which combination of video codecs should I look at? Record straight in x264 and recompress to "better" x264 later? Raw video, then compress? How would I go about pinpointing issues such as the frame drops I've been experiencing?

    EDIT: This is the ffmpeg line I currently use.

    ffmpeg -v warning -f x11grab -s 1920x1080 -r 30000/1001 -i :0.0 \
        -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 \
        -threads 0 \
        recvideo.y4m
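
    One common way to resolve this dilemma is to split the job in two: capture with a cheap lossless x264 encode (preset ultrafast, -qp 0), then recompress offline with slower, better-compressing settings. The sketch below, written as a small Python wrapper around ffmpeg to match the style of the other examples on this page, only illustrates that split; the file names, display, geometry and frame rate are assumptions, not values taken from the question.

    # Sketch only: lossless screen capture first, heavier x264 recompression later.
    # Assumes an ffmpeg build with x11grab and libx264; file names, display,
    # geometry and frame rate below are placeholders, not values from the question.
    import subprocess

    CAPTURE = "capture_lossless.mkv"   # hypothetical intermediate file
    FINAL = "screencast_final.mkv"     # hypothetical delivery file


    def capture_screen():
        """Record the X11 display with a fast, lossless x264 encode (stop with 'q')."""
        subprocess.run([
            "ffmpeg", "-v", "warning",
            "-f", "x11grab", "-video_size", "1920x1080", "-framerate", "30",
            "-i", ":0.0",
            "-c:v", "libx264", "-preset", "ultrafast", "-qp", "0",  # lossless, low CPU cost
            CAPTURE,
        ], check=True)


    def recompress():
        """Re-encode the lossless capture with slower, better-compressing settings."""
        subprocess.run([
            "ffmpeg", "-v", "warning", "-i", CAPTURE,
            "-vf", "scale=1280:720",
            "-c:v", "libx264", "-preset", "slow", "-crf", "20",
            FINAL,
        ], check=True)


    if __name__ == "__main__":
        capture_screen()   # runs until the capture is stopped
        recompress()

    The point of the lossless intermediate is that the capture step only has to keep up with the disk, while the expensive encode can run at its own pace afterwards; if frames still drop during capture, the bottleneck is more likely I/O than the encoder.
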
  • ffmpeg SDP file for Darwin Streaming Server

    10 September 2012, by SP Sandhu

    I am making a streaming server to view the live video feed from my webcam on my mobile device.

    I considered using ffmpeg, VLC and DSS, and made the following setup, which worked somewhat, though frames were skipped:

    video4linux2 > ffserver > VLC transcoding > DSS

    (RAW to ffserver) > (outputs to SDP link) > (SDP link to SDP file) > (SDP file to live streaming to mobile)

    Later, on testing, I found VLC to be very inefficient and slow on my netbook (Intel Atom N480), as it skips a lot of frames.

    DSS can stream an SDP file from its /usr/local/movies directory (the default).

    At the same time, ffmpeg's ffserver module can stream a live feed to an SDP link (not an SDP file).

    My requirement is to create an SDP file in DSS's /usr/local/movies directory so that I can pass it to DSS for streaming.

    So, how can I create an SDP file from ffmpeg, or create an SDP file from an SDP link (without using VLC's transcoding)?

    How can I do that?
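
    For what it's worth, reasonably recent ffmpeg builds can write the session description themselves when the output is RTP, which removes VLC from the chain entirely. The sketch below assumes such a build (it relies on the -sdp_file global option) and uses placeholder device, port and path values; it is an illustration, not the asker's actual setup.

    # Sketch only: stream the webcam over RTP with ffmpeg and have it write the
    # SDP straight into DSS's media directory. Assumes an ffmpeg build with the
    # -sdp_file option; device, port and output path are placeholders.
    import subprocess

    SDP_PATH = "/usr/local/movies/webcam.sdp"   # DSS serves SDP files from here
    RTP_URL = "rtp://127.0.0.1:5004"            # hypothetical RTP destination

    subprocess.run([
        "ffmpeg",
        "-f", "video4linux2", "-i", "/dev/video0",           # live webcam input
        "-c:v", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-an",                                                # video only keeps the SDP simple
        "-sdp_file", SDP_PATH,                                # write the SDP for DSS to pick up
        "-f", "rtp", RTP_URL,
    ], check=True)

    Older builds without -sdp_file print the same SDP block to standard output when the RTP output starts, so capturing that text and saving it into /usr/local/movies is another option.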

  • How do I use ffmpeg with Python by passing File Objects (instead of locations to files on disk)

    1 May 2012, by Lyle Pratt

    I'm trying to use ffmpeg with Python's subprocess module to convert some audio files. I grab the audio files from a URL and would like to just be able to pass the Python File Objects to ffmpeg, instead of first saving them to disk. It would also be very nice if I could just get back a file stream instead of having ffmpeg save the output to a file.

    For reference, this is what I'm doing now:

    tmp = "/dev/shm"
    audio_wav_file = requests.get(audio_url)
    ##              ##                         ##
    ## This is what I don't want to have to do ##
    wavfile = open(tmp+filename, 'wrb')  
    wavfile.write(audio_wav_file.content)
    wavfile.close()
    ##              ##                         ##
    conversion = subprocess.Popen('ffmpeg -i "'+tmp+filename+'" -y "'+tmp+filename_noext+'.flac" 2>&1', shell = True, stdout = subprocess.PIPE).stdout.read()

    Does anyone know how to do this?

    Thanks!
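
    A hedged sketch of the pipe-based route, assuming the audio fits comfortably in memory: hand the downloaded bytes to ffmpeg on stdin (pipe:0) and read the FLAC result back from stdout (pipe:1), so nothing is written to disk. The function and variable names are illustrative; only requests, subprocess and the ffmpeg conversion itself are carried over from the question.

    # Sketch only: convert WAV bytes to FLAC bytes entirely through pipes.
    # Assumes the whole file fits in memory; names here are illustrative.
    import subprocess

    import requests


    def wav_url_to_flac_bytes(audio_url):
        """Download a WAV file and return it as FLAC-encoded bytes."""
        wav_bytes = requests.get(audio_url).content

        proc = subprocess.Popen(
            ["ffmpeg",
             "-i", "pipe:0",      # read the WAV from stdin
             "-f", "flac",        # stdout has no filename, so force the output format
             "pipe:1"],           # write the FLAC to stdout
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL,
        )
        flac_bytes, _ = proc.communicate(wav_bytes)
        if proc.returncode != 0:
            raise RuntimeError("ffmpeg failed to convert the audio stream")
        return flac_bytes

    Because stdout is not seekable, ffmpeg may warn that it cannot rewrite the FLAC header with the final sample count; the stream is still decodable. Passing the command as a list also avoids the shell quoting of the original shell=True call.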