Other articles (71)

  • Personalizing by adding your logo, banner or background image

    5 September 2013

    Some themes take into account three customization elements: adding a logo; adding a banner; adding a background image.

  • Encoding and transformation into formats readable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents so that they are readable on the Internet and automatically usable without any intervention by the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player required by older browsers.
    Audio documents are likewise re-encoded into the two formats usable by HTML5: MP3 and Ogg. The "MP3" version (...)
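
    For illustration only, these are generic commands rather than MediaSPIP's actual encoding pipeline (file names here are placeholders); transcodes of roughly this shape produce the formats listed above:

    ffmpeg -i source.mov -c:v libx264 -c:a aac video.mp4
    ffmpeg -i source.mov -c:v libtheora -c:a libvorbis video.ogv
    ffmpeg -i source.mov -c:v libvpx -c:a libvorbis video.webm
    ffmpeg -i source.wav -c:a libmp3lame audio.mp3
    ffmpeg -i source.wav -c:a libvorbis audio.ogg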

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or later. If necessary, contact the administrator of your MediaSPIP to find out.

On other sites (11869)

  • Bug #4543 (New): Accessibility of ajax loads (live regions)

    2 September 2020, by nicod _

    Following a site audit by Temesis, the finding is that the aria attributes we use are misplaced.

    To quote:

    There seems to be a more general problem with the use of live regions:
    I once again encounter the wrapping div div.ariaformprop, which carries the attributes

    aria-live
    aria-atomic
    aria-relevant

    This use of these attributes is incorrect:

    it does not produce the result that was probably intended
    support for aria-relevant in assistive technologies is not guaranteed

    It is therefore necessary not to use a live region on a wrapping div like that one.
    These attributes should be removed, and aria-live and aria-atomic used sparingly.

    I also looked at the example you give on spip.net.

    The example was: https://www.spip.net/spip.php?page=recherche&recherche=plugin&debut_articles=0#pagination_articles

    This proliferation of aria-live + aria-atomic probably stems from good intentions, but as sometimes happens with accessibility, the best is the enemy of the good.

    Placing these attributes, with these values, on wrapping divs causes screen readers to automatically read out the content that has been updated.

    The user can of course interrupt the reading, but in my opinion this amounts to imposing something that was not necessarily wanted. In this way the site owner imposes a different experience on screen reader users.
    Next, the context must of course be taken into account. There may be cases where this behavior is appropriate. That is not the case here.

    In the case of ajax reloads, each case and its context must be examined, but most of the time the principle is:

    Do not use these attributes when there is no interaction / ajax reload. For example, on a form this makes no sense: you fill in the form and then submit it. A priori (if I have understood the behavior correctly) there is no reason to use a live region. In theory, the presence of these attributes when nothing is reloaded should not cause any problems, but some screen readers take liberties with the specifications and may vocalize content anyway. That is why they must be used sparingly.

    When they are useful and necessary, use these attributes to announce a change. Example: giving the new number of results after a filter has been applied. But bear in mind that in many cases they are not necessary; an action button must be explicit before it is pressed. So, for example, when I ask to display page 2 in a result pagination, there is no information to vocalize automatically: the fact that the button says "display page 2" is enough, and it would be superfluous, even annoying, to vocalize anything more (see the markup sketch at the end of this note).

    Finally, keyboard focus must be managed. This aspect is essential as soon as there are ajax interactions, but it depends entirely on the context, and it is independent of whether live regions are needed. Example: in a set of filters, leave the focus on the filter so that the user can keep browsing the set of filters. Conversely, in a pagination (or with a "show more articles" button, in a news feed or on a product page of an e-commerce site), move the focus to the first new element that has just been displayed (the zone that was updated).
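
    As a rough illustration of the recommendation above (hypothetical markup, not taken from SPIP or the audited pages), a narrowly scoped live region puts the attributes on the one element whose text actually changes, rather than on a wrapping div:

    <!-- The live region covers only the result count, so screen readers
         announce the updated number and nothing else. -->
    <p aria-live="polite" aria-atomic="true" id="result-count">42 results</p>
    <div id="results">
        <!-- result list refreshed by ajax; no live region attributes here -->
    </div>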

  • Combine Audio and Images in Stream

    19 December 2017, by SenorContento

    I would like to be able to create images on the fly and also create audio on the fly, and to combine them into an rtmp stream (for Twitch or YouTube). The goal is to accomplish this in Python 3, as that is the language my bot is written in. Bonus points for not having to save to disk.

    So far, I have figured out how to stream to rtmp servers using ffmpeg by loading a PNG image and playing it on loop, as well as loading an mp3 and then combining the two in the stream. The problem is that I have to load at least one of them from a file.

    I know I can use Moviepy to create videos, but I cannot figure out whether I can stream the video from Moviepy to ffmpeg or directly to rtmp. I think I would have to generate a lot of really short clips and send them, but I want to know if there's an existing solution.

    There's also OpenCV, which I hear can stream to rtmp but cannot handle audio.

    A redacted version of an ffmpeg command I have successfully tested is:

    ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    or

    cat Song-Stereo.mp3 | ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i - -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    I know these commands are not set up properly for smooth streaming; the result manages to screw up both Twitch's and YouTube's players, and I will have to figure out how to fix that.

    The problem with this is that I don't think I can stream both the image and the audio at once while creating them on the spot; I have to load one of them from the hard drive. This becomes a problem when trying to react to a command, user chat, or anything else that requires live reactions. I also do not want to destroy my hard drive by constantly saving to it.
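
    For what it's worth, one common way to avoid the disk entirely is to pipe raw generated frames straight into ffmpeg's stdin, extending the "-i -" idea from the cat command above. A minimal sketch of that approach follows (frame size, rate and duration are made-up values, and the rtmp URL is the same placeholder as above); note that stdin can carry only one input, so live audio would still need a second channel such as a named pipe:

    import subprocess

    import numpy
    import qrcode

    WIDTH, HEIGHT, FPS = 640, 480, 15

    # ffmpeg reads raw RGB frames from stdin ("-i -") and pushes the encoded
    # stream to the rtmp endpoint; no image files ever touch the disk.
    ffmpeg = subprocess.Popen(
        ["ffmpeg",
         "-f", "rawvideo", "-pix_fmt", "rgb24",
         "-s", "%dx%d" % (WIDTH, HEIGHT), "-framerate", str(FPS),
         "-i", "-",
         "-c:v", "libx264", "-preset", "fast", "-pix_fmt", "yuv420p",
         "-f", "flv", "rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY"],
        stdin=subprocess.PIPE)

    for t in range(FPS * 10):  # ten seconds of generated frames
        img = qrcode.make("Frame %d" % t).convert("RGB").resize((WIDTH, HEIGHT))
        ffmpeg.stdin.write(numpy.array(img, dtype=numpy.uint8).tobytes())

    ffmpeg.stdin.close()
    ffmpeg.wait()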

    As for the Python code, what I have tried so far in order to create a video is the following. This still saves to the hard drive and is not responsive in real time, so it is not very useful to me. The video itself is okay, with one exception: as the video gets closer to the end, the clock shown in the QR code and the video's own clock drift farther and farther apart. I can work around that limitation if it shows up while live streaming.

    # Build one QR-code frame for timestamp t (seconds into the clip).
    def make_frame(t):
        img = qrcode.make("Hello! The second is %s!" % t)
        return numpy.array(img.convert("RGB"))

    # Render 120 seconds of generated frames to a GIF on disk...
    clip = mpy.VideoClip(make_frame, duration=120)
    clip.write_gif("test.gif", fps=15)

    # ...then reload the GIF and re-encode it as an MP4.
    gifclip = mpy.VideoFileClip("test.gif")
    gifclip.set_duration(120).write_videofile("test.mp4", fps=15)

    My goal is to be able to produce something along the lines of the following pseudo-code:

    original_video = qrcode_generator("I don't know, a clock, pyotp, today's news sources, just anything that can be generated on the fly!")
    original_video.overlay_text(0,0,"This is some sample text, the left two are coordinates, the right three are font, size, and color", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # NOTICE - I did not add any time measurements to the actual video itself. The whole point is that this is a live stream and not a video clip, so the time frame would be now. The 2 seconds above is for our pseudo sine wave generator to know how long the audio clip should be, not for the actual streaming library.

    stream.send_to_rtmp_server(original_video) # Doesn't matter if ffmpeg or some native library

    The above example is what I am looking for in terms of video creation in Python and then streaming. I am not trying to create a clip and then stream it later; I am trying to have the program respond to outside events and then update its stream to do whatever it wants. It is sort of like a chat bot, but with video instead of text.

    def track_movement(...):
     ...
     return ...

    original_video = user_submitted_clip(chat.lastVideoMessage)
    original_video.overlay_text(0,0,"The robot watches the user's movements and puts a blue square around it.", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # It would be awesome if I could also figure out how to perform advanced actions such as tracking movements or pulling a face out of a clip and then applying effects to it on the fly. I know OpenCV can track movements, and I hear that it can work with streams, but I cannot figure out how that works. Any help would be appreciated! Thanks!

    Because I forgot to add the imports, here are some useful imports I have in my file!

    import numpy
    import pyotp
    import qrcode
    from io import BytesIO
    from moviepy import editor as mpy

    The library pyotp is for generating one-time password authenticator codes, qrcode is for the QR codes, BytesIO is used for virtual files, moviepy is what I used to generate the GIF and MP4, and numpy carries the frame arrays. I believe BytesIO might be useful for piping data to the streaming service, but how that happens depends entirely on how data is sent to the service, whether that is ffmpeg over the command line (from subprocess import Popen, PIPE) or a native library.

  • Issues with video frame dropout using Accord.NET VideoFileWriter and FFMPEG

    9 January 2018, by David

    I am testing out writing video files using the Accord.Video library. I have a WPF project created in Visual Studio 2017, and I have installed Accord.Video.FFMPEG as well as Accord.Video.VFW using NuGet, along with their dependencies.

    I have created a very simple video to test basic file output. However, I am running into some issues. My goal is to be able to output videos with a variable frame rate, because in the future I will be using this code to input images from a webcam device that will then be saved to a video file, and video from webcams typically has variable frame rates.

    For now, in this example, I am not inputting video from a webcam; rather, I am generating a simple "moving box" image and outputting the frames to a video file. The box changes color every 20 frames: red, green, blue, yellow, and finally white. I also set the frame rate to 20 fps.

    When I use Accord.Video.VFW, the frame rate is correctly set, and all the frames are correctly written to the video file. The resulting video looks like this (see the YouTube link): https://youtu.be/K8E9O7bJIbg

    This is just a reference, however. I don't intend to use Accord.Video.VFW, because it outputs uncompressed data to an AVI file and it doesn't support variable frame rates. I would like to use Accord.Video.FFMPEG because it is supposed to support variable frame rates.

    When I attempt to use the Accord.Video.FFMPEG library, however, the video does not turn out how I would expect it to look. Here is a YouTube link: https://youtu.be/cW19yQFUsLI

    As you can see, in that example the box remains the first color longer than the other colors, and it never reaches the final color (white). When I inspect the video file, not all 100 frames were written to it; there are typically 69 or 73. And the expected frame rate and duration obviously do not match up.

    Here is the code that generates both these videos:

    public MainWindow()
    {
       InitializeComponent();

       Accord.Video.VFW.AVIWriter avi_writer = new Accord.Video.VFW.AVIWriter();
       avi_writer.FrameRate = 20;
       avi_writer.Open("test2.avi", 640, 480);

       Accord.Video.FFMPEG.VideoFileWriter k = new Accord.Video.FFMPEG.VideoFileWriter();
       k.FrameRate = 20;
       k.Width = 640;
       k.Height = 480;
       k.Open("test.mp4");
       for (int i = 0; i < 100; i++)
       {
           TimeSpan t = new TimeSpan(0, 0, 0, 0, 50 * i);
           var b = new System.Drawing.Bitmap(640, 480);
           var g = Graphics.FromImage(b);
           var br = System.Drawing.Brushes.Blue;
           if (t.TotalMilliseconds < 1000)
               br = System.Drawing.Brushes.Red;
           else if (t.TotalMilliseconds < 2000)
               br = System.Drawing.Brushes.Green;
           else if (t.TotalMilliseconds < 3000)
               br = System.Drawing.Brushes.Blue;
           else if (t.TotalMilliseconds < 4000)
               br = System.Drawing.Brushes.Yellow;
           else
               br = System.Drawing.Brushes.White;

           g.FillRectangle(br, 50 + i, 50, 100, 100);
           System.Console.WriteLine("Frame: " + (i + 1).ToString() + ", Millis: " + t.TotalMilliseconds.ToString());

           #region This is the code in question

           k.WriteVideoFrame(b, t);
           avi_writer.AddFrame(b);

           #endregion
       }

       avi_writer.Close();
       k.Close();
       System.Console.WriteLine("Finished writing video");
    }

    I have tried changing a few things under the assumption that maybe the WriteVideoFrame function isn't able to finish in time, so I need to slow the program down to let it complete. Under that assumption, I replaced the WriteVideoFrame call with the following code:

    Task taskA = new Task(() => k.WriteVideoFrame(b, t));
    taskA.Start();
    taskA.Wait();

    And I have tried the following code:

    Task.WaitAll(
       Task.Run( () =>
       {
           lock(syncObj)
           {
               k.WriteVideoFrame(b, t);
           }
       }
    ));

    And even just a standard call where I don't specify a timestamp:

    k.WriteVideoFrame(b);

    None of these work. They all result in something similar.

    Any suggestions on getting the WriteVideoFrame function of the Accord.Video.FFMPEG.VideoFileWriter class to work?

    Thanks for any and all help!

    [edits below]

    I have done some more investigating. I still haven’t found a good solution, but here is what I have found so far. After declaring my VideoFileWriter object, I have tried setting up some options for the video.

    When I use the H264 codec with the following options, it correctly saves 100 frames at a frame rate of 20 fps; however, any normal media player (I tested both VLC and Windows Media Player) ends up playing a 10-second video instead of a 5-second one. Essentially, it seems like they play it at half speed. Here is the code that gives that result:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.H264;
    k.VideoOptions["crf"] = "18";
    k.VideoOptions["preset"] = "veryfast";
    k.VideoOptions["tune"] = "zerolatency";
    k.VideoOptions["x264opts"] = "no-mbtree:sliced-threads:sync-lookahead=0";

    Additionally, if I use the Mpeg4 codec, I get the same "half-speed" result:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Mpeg4;

    However, if I use a WMV codec, then it correctly results in 100 frames at 20 fps and a 5-second video that both media players play correctly:

    k.VideoCodec = Accord.Video.FFMPEG.VideoCodec.Wmv1;

    Although this is good news, it still doesn't solve the problem, because WMV doesn't support variable frame rates. And it still doesn't answer the question of why the problem is happening in the first place.

    As always, any help would be appreciated!