
Other articles (45)

  • Updating from version 0.1 to 0.2

    24 June 2013

    An explanation of the notable changes involved in moving from MediaSPIP version 0.1 to version 0.3. What’s new?
    Software dependencies: the latest versions of FFMpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013

    Present changes to your MediaSPIP, or news about your projects, using the news section of your MediaSPIP site.
    In spipeo, the default MediaSPIP theme, news items are displayed at the bottom of the main page, below the editorial content.
    You can customise the form used to create a news item.
    News creation form: for a document of the news type, the fields offered by default are: publication date (customise the publication date) (...)

On other sites (5842)

  • How to parallelize this for loop for rapidly converting YUV422 to RGB888?

    16 April 2015, by vineet

    I am using the v4l2 API to grab images from a Microsoft Lifecam and then transferring them over TCP to a remote computer. I am also encoding the video frames to MPEG2VIDEO using the ffmpeg API. The recorded videos play too fast, probably because not enough frames are captured, combined with incorrect FPS settings.

    The following code converts a YUV422 source to an RGB888 image. This fragment is the bottleneck in my program: it takes nearly 100-150 ms to execute, which means I can’t log more than 6-10 FPS at 1280 x 720 resolution. CPU usage is at 100% as well.

    for (int line = 0; line < image_height; line++) {
        for (int column = 0; column < image_width; column++) {
            // BT.601-style YUV -> RGB; CLAMP saturates each result to [0, 255]
            *dst++ = CLAMP((double)*py + 1.402 * ((double)*pv - 128.0));                                 // R - first byte
            *dst++ = CLAMP((double)*py - 0.344 * ((double)*pu - 128.0) - 0.714 * ((double)*pv - 128.0)); // G - next byte
            *dst++ = CLAMP((double)*py + 1.772 * ((double)*pu - 128.0));                                 // B - next byte

            // keep the luma value for the frame that goes to the encoder
            vid_frame->data[0][line * frame->linesize[0] + column] = *py;

            // increment py, pu, pv here
        }
    }

    ’dst’ is then compressed as JPEG and sent over TCP, and ’vid_frame’ is saved to disk.

    How can I make this code fragment faster, so that I can get at least 30 FPS at 1280x720 instead of the present 5-6 FPS?

    I’ve tried parallelizing the for loop across three threads using pthreads, processing one third of the rows in each thread:

    for (int line = 0; line < image_height/3; line++) // thread 1
    for (int line = image_height/3; line < 2*image_height/3; line++) // thread 2
    for (int line = 2*image_height/3; line < image_height; line++) // thread 3

    This gave me only a minor improvement of 20-30 milliseconds per frame.
    What would be the best way to parallelize such loops? Can I use GPU computing, or something like OpenMP? Say, spawning some 100 threads to do the calculations?
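
    To make the question concrete, here is the kind of row-parallel OpenMP rewrite I have in mind (an untested sketch: it assumes the camera delivers packed YUYV, i.e. Y0 U Y1 V, and it replaces the sequential pointer walk with per-row indexing, since py, pu, pv and dst advance sequentially and cannot be shared between threads as-is):

    // Untested sketch: row-parallel packed-YUYV (4:2:2) to RGB888 using OpenMP.
    // CLAMP is assumed to saturate to [0, 255], as the loop above implies.
    // Build with: gcc -O2 -fopenmp
    #define CLAMP(x) ((unsigned char)((x) < 0.0 ? 0 : ((x) > 255.0 ? 255 : (x))))

    void yuyv_to_rgb888(const unsigned char *src, unsigned char *dst,
                        int image_width, int image_height)
    {
        #pragma omp parallel for
        for (int line = 0; line < image_height; line++) {
            // Each thread derives its own offsets from the row index,
            // so no pointer state is shared between threads.
            const unsigned char *row = src + (size_t)line * image_width * 2; // 2 bytes/pixel in
            unsigned char *out = dst + (size_t)line * image_width * 3;       // 3 bytes/pixel out

            for (int column = 0; column < image_width; column += 2) {
                double y0 = row[0], u = row[1], y1 = row[2], v = row[3];

                *out++ = CLAMP(y0 + 1.402 * (v - 128.0));                       // R0
                *out++ = CLAMP(y0 - 0.344 * (u - 128.0) - 0.714 * (v - 128.0)); // G0
                *out++ = CLAMP(y0 + 1.772 * (u - 128.0));                       // B0
                *out++ = CLAMP(y1 + 1.402 * (v - 128.0));                       // R1
                *out++ = CLAMP(y1 - 0.344 * (u - 128.0) - 0.714 * (v - 128.0)); // G1
                *out++ = CLAMP(y1 + 1.772 * (u - 128.0));                       // B1

                row += 4; // one YUYV macropixel = two pixels = 4 bytes (Y0 U Y1 V)
            }
        }
    }

    With per-row indexing, every iteration of the outer loop is independent, which is exactly what #pragma omp parallel for needs; the runtime should then spread the rows over all four cores without spawning hundreds of threads.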

    I also noticed higher frame rates with my laptop webcam as compared to the Microsoft USB Lifecam.

    Here are some other details:

    • Ubuntu 12.04, ffmpeg 2.6
    • AMD A8 quad-core processor with 6GB RAM
    • Encoder settings:
      • codec: AV_CODEC_ID_MPEG2VIDEO
      • bitrate: 4000000
      • time_base: (AVRational){1, 20}
      • pix_fmt: AV_PIX_FMT_YUV420P
      • gop: 10
      • max_b_frames: 1
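
    To make the settings concrete, this is roughly how I apply them to the encoder context (a sketch only; the helper name and the 1280x720 frame size are illustrative):

    // Sketch: MPEG-2 encoder setup matching the settings listed above (ffmpeg 2.x C API).
    // Assumes avcodec_register_all() has been called once at startup.
    #include <libavcodec/avcodec.h>

    AVCodecContext *open_mpeg2_encoder(void)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG2VIDEO);
        if (!codec)
            return NULL;

        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->bit_rate     = 4000000;
        ctx->time_base    = (AVRational){1, 20};   // one tick = 1/20 s, i.e. 20 FPS
        ctx->pix_fmt      = AV_PIX_FMT_YUV420P;
        ctx->gop_size     = 10;
        ctx->max_b_frames = 1;
        ctx->width        = 1280;                  // capture resolution used above
        ctx->height       = 720;

        if (avcodec_open2(ctx, codec, NULL) < 0)
            return NULL;
        return ctx;
    }

    I suspect this timebase is part of why the recordings play too fast: with time_base = (AVRational){1, 20}, frames are stamped as if captured at 20 FPS even when the loop only delivers 5-6 frames per second.
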
  • Play UDP live video stream in UWP

    19 April 2018, by Nicolas Séveno

    I need to display a live video stream in a UWP application.

    The video stream comes from a GoPro. It is transported in UDP messages. I think it is an MPEG-2 TS stream.

    I can play it successfully using FFPlay with the following command line:

    ffplay -fflags nobuffer -f:v mpegts udp://:8554

    I would like to play it with MediaPlayerElement without using a third party library.

    According to the following page:
    https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/supported-codecs
    UWP should be able to play it (I installed the "MPEG 2 video extension" from the Windows Store).

    I tried using DatagramSocket and its MessageReceived event to receive the UDP packets; this works without problems:

    _datagramSocket = new DatagramSocket();
    _datagramSocket.MessageReceived += (s, args) =>
    {
        Debug.WriteLine("message received");
    };
    // BindServiceNameAsync takes the local port as a string
    await _datagramSocket.BindServiceNameAsync("8554");

    Then I create an MseStreamSource:

    _mseStreamSource = new MseStreamSource();
    _mseStreamSource.Opened += (_, __) =>
    {
       _mseSourceBuffer = _mseStreamSource.AddSourceBuffer("video/mp2t");
    };
    this.MediaSource = MediaSource.CreateFromMseStreamSource(_mseStreamSource);

    Then, in the DatagramSocket.MessageReceived handler, I forward the messages to the MseStreamSource:

    using (IInputStream stream = args.GetDataStream())
    {
       _mseSourceBuffer.AppendStream(stream);
    }

    The AppendStream method fails with HRESULT 0x8070000B for some packets.
    If I catch the error, the MediaPlayerElement displays a message along the lines of "video not supported or incorrect file name" (I am not sure of the exact wording; my Windows is in French).

    Is MseStreamSource the correct way to display this stream? Is there a better solution?

  • Compiling FFMPEG on CentOS DigitalOcean

    29 July 2015, by coder_uk

    I set up a DigitalOcean instance running CentOS 6.5 and successfully followed the guide to compiling FFMPEG (https://trac.ffmpeg.org/wiki/CompilationGuide/Centos). Hurrah!

    But of course I then realised that, by default, DigitalOcean creates a root user, so ffmpeg now lives in /root/bin/ffmpeg. That isn’t ideal, because to exec the ffmpeg binary from nginx I would have to run nginx as root for it to have permission.

    Questions...

    1) Long shot, but presumably if I change the owner of the ffmpeg binary to nginx it still won’t work, because nginx won’t be able to access the /root folder it sits in. Correct?

    2) I could run nginx as root (’user root’). But this seems like a very bad idea. Correct?

    3) Which leaves me with the option of creating a new user and then compiling ffmpeg into its home folder. But which user? EC2 creates ’ec2-user’, so should I make my own equivalent for DO? But then won’t I have to run nginx as that user, or I’ll run into the same problem?

    Or should I compile ffmpeg into the ’nginx’ home folder, if indeed it has one? Is that how it is supposed to be done?

    Since compiling ffmpeg takes ages, I don’t want to keep doing it, and the prebuilt static binaries all seem very out of date. Thanks