
Media (91)

Other articles (97)

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is analyzed to extract the data needed for search-engine indexing, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Installation in farm mode

    4 February 2011

    Farm mode makes it possible to host several MediaSPIP sites while installing the functional core only once.
    This is the method we use on this very platform.
    Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
    To begin with, you must have installed the same files as the installation (...)

  • Installation in standalone mode

    4 February 2011

    Installing the MediaSPIP distribution takes several steps: retrieving the necessary files (two methods are possible at this point: installing the ZIP archive containing the whole distribution, or fetching the sources of each module separately via SVN); preconfiguration; and the final installation.
    Installing the MediaSPIP ZIP archive
    This installation method is the simplest way to install the whole distribution (...)

On other sites (7104)

  • speeding up x264 encoding (C++ code with libavcodec)

    20 December 2012, by Hrishikesh_Pardeshi

    I am trying to capture the Windows screen (continuous screenshots) and encode the frames with x264. For that I am using the avcodec_encode_video2 function available in libavcodec. However, it takes a huge amount of time: encoding an individual frame fluctuates between 25 and 1800 milliseconds.

    I tried both 1080p and 720p, with video recording on screen.

    These are the settings I am using; this was tested on Windows 7 (win32 release build, 4 GB of RAM).

    bit_rate = 2000, width = 1920, height = 1080
    qmin = 0, qmax = 0, max_b_frames = 0, frame_rate = 25, pixel_format = YUV 4:4:4.
    The remaining settings are the defaults fetched using avcodec_get_context_defaults3().
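
    In code, that configuration corresponds roughly to the following sketch (2012-era libavcodec API; error handling is omitted, and the screen-capture code that fills the AVFrame is assumed to exist elsewhere):

    /* Sketch only: how the settings above map onto the libavcodec API. */
    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_encoder(void)
    {
        AVCodec *codec;
        AVCodecContext *ctx;

        avcodec_register_all();
        codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        ctx   = avcodec_alloc_context3(codec);

        avcodec_get_context_defaults3(ctx, codec);   /* everything not set below stays at its default */
        ctx->bit_rate     = 2000;
        ctx->width        = 1920;
        ctx->height       = 1080;
        ctx->time_base    = (AVRational){1, 25};     /* 25 fps */
        ctx->qmin         = 0;
        ctx->qmax         = 0;
        ctx->max_b_frames = 0;
        ctx->pix_fmt      = AV_PIX_FMT_YUV444P;      /* YUV 4:4:4 */

        avcodec_open2(ctx, codec, NULL);
        return ctx;
    }

    /* Called once per captured screen image. */
    static int encode_frame(AVCodecContext *ctx, AVFrame *frame, AVPacket *pkt)
    {
        int got_packet = 0;
        av_init_packet(pkt);
        pkt->data = NULL;                            /* let the encoder allocate the packet */
        pkt->size = 0;
        if (avcodec_encode_video2(ctx, pkt, frame, &got_packet) < 0)
            return -1;
        return got_packet;                           /* 1 when pkt holds encoded data */
    }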

    Sample data (in milliseconds) for 20 frames (consecutive, chosen at random) out of a set of 250 frames:
    121, 106, 289, 126, 211, 30, 181, 58, 213, 34, 245, 50, 56, 364, 247, 171, 254, 83, 82, 229

    For this application it is essential to capture at least 15 fps. Can someone tell me whether any options can be used to improve the frame rate? I need the encoding to be lossless, but I am open to some increase in file size.

    Thanks in advance.

  • FFmpeg: Pipe segments to s3

    24 September 2024, by Brendan Kennedy

    I'd like to pipe ffmpeg segments to s3 without writing them to disk.

    ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 output_%04d.mkv

    Is it possible to modify this command so that ffmpeg writes the segments to an s3 bucket? Something like this, perhaps?

    ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 pipe:1 \
      | aws s3 cp - s3://bucket/output_%04d.mkv

    When I run the command above I receive this error:

    Could not write header for output file #0 (incorrect codec parameters ?): Muxer not found

    This command works, except that the entire video is uploaded rather than the individual segments:

    ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 pipe:output_%04d.mkv \
      | aws s3 cp - s3://bucket/test.mkv

  • Real-time compression/encoding using ffmpeg in Objective-C

    20 February 2014, by halfwaythru

    I have a small application written in Objective-C that looks for the video devices on the machine and allows the user to record video. I need to be able to compress this video stream in real time. I do not want to save the whole video; I want to compress it as much as possible and only write out the compressed version.

    I also don't want to use AVFoundation's built-in compression methods, and need to use a third-party library like ffmpeg.

    So far, I have been able to record the video and get individual frames using 'AVCaptureVideoDataOutputSampleBufferDelegate' in this method:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
      didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
      fromConnection:(AVCaptureConnection *)connection

    So I have a stream of images, basically, and I want to throw them into ffmpeg (which is all set up on my machine). Do I need to call a terminal command to do this? And if I do, how do I use the image stack as input to the ffmpeg command instead of a video file? Also, how do I combine all the little videos in the end?
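
    For illustration, the library route could look roughly like the sketch below. This is only a sketch, assuming libavcodec and libswscale are linked into the app rather than invoking the ffmpeg binary, and assuming the capture output is configured for 32-bit BGRA; the pixel bytes and stride would be read out of the CMSampleBuffer inside the delegate method above.

    /* Sketch only (not a complete implementation): convert one BGRA capture
       frame to YUV420P with libswscale and encode it with libavcodec. */
    #include <stdint.h>
    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    static struct SwsContext *sws;       /* BGRA -> YUV420P converter, created lazily */
    static int64_t next_pts;

    /* 'ctx' is an opened H.264 encoder context and 'yuv' a YUV420P AVFrame sized
       to the capture; both are assumed to be set up once when recording starts. */
    int encode_bgra_frame(AVCodecContext *ctx, AVFrame *yuv,
                          const uint8_t *bgra, int bgra_stride, AVPacket *pkt)
    {
        const uint8_t *const src[1] = { bgra };
        const int src_stride[1]     = { bgra_stride };
        int got_packet = 0;

        if (!sws)
            sws = sws_getContext(ctx->width, ctx->height, AV_PIX_FMT_BGRA,
                                 ctx->width, ctx->height, AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, NULL, NULL, NULL);

        /* Convert the captured BGRA image into the encoder's pixel format. */
        sws_scale(sws, src, src_stride, 0, ctx->height, yuv->data, yuv->linesize);
        yuv->pts = next_pts++;

        av_init_packet(pkt);
        pkt->data = NULL;                 /* let the encoder allocate the packet */
        pkt->size = 0;
        if (avcodec_encode_video2(ctx, pkt, yuv, &got_packet) < 0)
            return -1;
        return got_packet;                /* 1 when pkt contains compressed data */
    }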

    Any help is appreciated. Thanks!