
Other articles (39)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to modify their information on the authors page

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSPIP is version 0.2 or later. If needed, contact your MediaSPIP administrator to find out.

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (7256)

  • ffmpeg scale doesn't work if destination res is larger than source

    21 December 2017, by Vibok

    I just tried the ffmpeg filter scale=320x180:force_original_aspect_ratio=disable to resize a 270x480 video into a 320x180 video, and it still keeps the aspect ratio; force_original_aspect_ratio=disable got ignored.

    I guess the problem is that the destination width is bigger than the source width while the destination height is smaller, because it worked for other videos, even without force_original_aspect_ratio=disable.

    The resulting file is also weird. It says it’s 320x180, while its width is obviously smaller than that.

    Here are the video files, original and resized. https://drive.google.com/file/d/1UNXlfwpzoizhx7WOjqn44mlcbQgacOHS/view?usp=sharing

    Here is my command:

    ffmpeg -i 480P_600K_107047752.mp4 -force_key_frames 00:00:03.000 -filter_complex [0:v]scale=320x180:force_original_aspect_ratio=disable,fps=30[vid];[vid]
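    A plausible explanation, offered as a hedged aside: ffmpeg’s scale filter preserves the input display aspect ratio by adjusting the output sample aspect ratio, so a file can report 320x180 storage dimensions while players render it narrower. Appending setsar=1 forces square pixels; a minimal sketch (file names are placeholders, not the poster’s files):

    ffmpeg -i input.mp4 -vf "scale=320:180,setsar=1" -c:a copy output.mp4
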
  • Slow, robotic audio encoding with Humble-Video api (ffmpeg)

    30 November 2017, by Walker Knapp

    I have a program that tries to parse pcm_s16le audio samples from a .wav file and encode them to mp3 using the Humble-Video API.
    This isn’t what the final program will do, but it outlines the problem I’m encountering.
    The issue is that the output audio files sound robotic and slow.

    input.wav (just some random audio from a video game; ignore the wonky size headers): https://drive.google.com/file/d/1nQOJGIxoSBDzprXExyTVNyyipSKQjyU0/view?usp=sharing

    output.mp3: https://drive.google.com/file/d/1MfEFw2V7TiKS16SqSTv3wrbh6KoankIj/view?usp=sharing

    output.wav: https://drive.google.com/file/d/1XtDdCtYao0kS0Qe2l6JGu1tC5xvqt62f/view?usp=sharing

    import io.humble.video.*;

    import java.io.*;

    public class AudioEncodingTest {

       private static AudioChannel.Layout inLayout = AudioChannel.Layout.CH_LAYOUT_STEREO;
       private static int inSampleRate = 44100;
       private static AudioFormat.Type inFormat = AudioFormat.Type.SAMPLE_FMT_S16;
       private static int bytesPerSample = 2;

       private static File inFile = new File("input.wav");

       public static void main(String[] args) throws IOException, InterruptedException {
           render("output.mp3");
           render("output.wav");
       }

       public static void render(String filename) throws IOException, InterruptedException {

           //Starting everything up.

           Muxer muxer = Muxer.make(new File(filename).getAbsolutePath(), null, null);
           Codec codec = Codec.guessEncodingCodec(muxer.getFormat(), null, null, null, MediaDescriptor.Type.MEDIA_AUDIO);

           AudioFormat.Type findType = null;
           for(AudioFormat.Type type : codec.getSupportedAudioFormats()) {
               if(findType == null) {
                   findType = type;
               }
               if(type == inFormat) {
                   findType = type;
                   break;
               }
           }

           if(findType == null){
               throw new IllegalArgumentException("Couldn't find valid audio format for codec: " + codec.getName());
           }

           Encoder encoder = Encoder.make(codec);
           encoder.setSampleRate(44100);
           encoder.setTimeBase(Rational.make(1, 44100));
           encoder.setChannels(2);
           encoder.setChannelLayout(AudioChannel.Layout.CH_LAYOUT_STEREO);
           encoder.setSampleFormat(findType);
           encoder.setFlag(Coder.Flag.FLAG_GLOBAL_HEADER, true);

           encoder.open(null, null);
           muxer.addNewStream(encoder);
           muxer.open(null, null);

           MediaPacket audioPacket = MediaPacket.make();
           MediaAudioResampler audioResampler = MediaAudioResampler.make(encoder.getChannelLayout(), encoder.getSampleRate(), encoder.getSampleFormat(), inLayout, inSampleRate, inFormat);
           audioResampler.open();

           MediaAudio rawAudio = MediaAudio.make(1024/bytesPerSample, inSampleRate, 2, inLayout, inFormat);
           rawAudio.setTimeBase(Rational.make(1, inSampleRate));

           //Reading

           try(BufferedInputStream reader = new BufferedInputStream(new FileInputStream(inFile))){
               reader.skip(44);

               int totalSamples = 0;

               byte[] buffer = new byte[1024];
               int readLength;
               while((readLength = reader.read(buffer, 0, 1024)) != -1){
                   int sampleCount = readLength/bytesPerSample;

                   rawAudio.getData(0).put(buffer, 0, 0, readLength);
                   rawAudio.setNumSamples(sampleCount);
                   rawAudio.setTimeStamp(totalSamples);

                   totalSamples += sampleCount;

                   rawAudio.setComplete(true);

                   MediaAudio usedAudio = rawAudio;

                   if(encoder.getChannelLayout() != inLayout ||
                           encoder.getSampleRate() != inSampleRate ||
                           encoder.getSampleFormat() != inFormat){
                           usedAudio = MediaAudio.make(
                                   sampleCount,
                                   encoder.getSampleRate(),
                                   encoder.getChannels(),
                                   encoder.getChannelLayout(),
                                   encoder.getSampleFormat());
                           audioResampler.resample(usedAudio, rawAudio);
                   }

                   do{
                       encoder.encodeAudio(audioPacket, usedAudio);
                       if(audioPacket.isComplete()) {
                           muxer.write(audioPacket, false);
                       }
                   } while (audioPacket.isComplete());
               }
           }
           catch (IOException e){
               e.printStackTrace();
               muxer.close();
               System.exit(-1);
           }

           muxer.close();

       }
    }

    Edit

    I’ve gotten wave file exporting to work; however, the mp3s remain the same, which is very confusing. I changed the section counting how many samples each buffer of bytes holds: with interleaved 16-bit stereo PCM, a sample frame spans bytesPerSample * channels = 4 bytes, so dividing the byte count by bytesPerSample alone overstates numSamples by a factor of the channel count, which stretches the timeline and slows playback.

    MediaAudio rawAudio = MediaAudio.make(1024, inSampleRate, channels, inLayout, inFormat);
    rawAudio.setTimeBase(Rational.make(1, inSampleRate));

    //Reading

    try(BufferedInputStream reader = new BufferedInputStream(new FileInputStream(inFile))){
        reader.skip(44);

        int totalSamples = 0;

        byte[] buffer = new byte[1024 * bytesPerSample * channels];
        int readLength;
        while((readLength = reader.read(buffer, 0, 1024 * bytesPerSample * channels)) != -1){
            int sampleCount = readLength/(bytesPerSample * channels);

            rawAudio.getData(0).put(buffer, 0, 0, readLength);
            rawAudio.setNumSamples(sampleCount);
            rawAudio.setTimeStamp(totalSamples);

  • Combine Audio and Images in Stream

    19 December 2017, by SenorContento

    I would like to be able to create images on the fly, and audio as well, and combine them into an rtmp stream (for Twitch or YouTube). The goal is to accomplish this in Python 3, as that is the language my bot is written in. Bonus points for not having to save to disk.

    So far, I have figured out how to stream to rtmp servers using ffmpeg by loading a PNG image and playing it on loop, as well as loading an mp3 and combining them together in the stream. The problem is that I have to load at least one of them from a file.

    I know I can use Moviepy to create videos, but I cannot figure out whether or not I can stream the video from Moviepy to ffmpeg or directly to rtmp. I think that I have to generate a lot of really short clips and send them, but I want to know if there’s an existing solution.

    There’s also OpenCV, which I hear can stream to rtmp but cannot handle audio.

    A redacted version of an ffmpeg command I have successfully tested with is

    ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    or

    cat Song-Stereo.mp3 | ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i - -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    I know these commands are not set up properly for smooth streaming; the result manages to screw up both Twitch’s and YouTube’s players, and I will have to figure out how to fix that.
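    One possible refinement, sketched here as a guess rather than a tested recipe: live inputs are usually read at native speed with -re, the keyframe interval pinned with -g, and the audio encoded as AAC, along the lines of:

    ffmpeg -re -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" -c:v libx264 -preset veryfast -g 30 -pix_fmt yuv420p -c:a aac -b:a 160k -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY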

    The problem with this is that I don’t think I can stream both the image and the audio at once while creating them on the spot; I have to load one of them from the hard drive. This becomes a problem when trying to react to a command, user chat, or anything else that requires live reactions, and I also do not want to destroy my hard drive by constantly saving to it.
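    One possible shape for a no-disk pipeline (a minimal sketch, not from the original post; the RTMP URL, frame size, and sine-tone audio are placeholder assumptions) is to generate frames in memory and pipe them to ffmpeg’s stdin as rawvideo, while ffmpeg supplies the audio track and pushes to RTMP:

    import subprocess
    import numpy

    WIDTH, HEIGHT, FPS = 320, 320, 15

    # ffmpeg reads raw RGB frames from stdin, adds a placeholder sine
    # tone as the audio track, encodes, and pushes to the RTMP endpoint.
    ffmpeg = subprocess.Popen(
        ["ffmpeg",
         "-f", "rawvideo", "-pix_fmt", "rgb24",
         "-s", "%dx%d" % (WIDTH, HEIGHT), "-r", str(FPS), "-i", "-",
         "-f", "lavfi", "-i", "sine=frequency=440",
         "-c:v", "libx264", "-preset", "fast", "-pix_fmt", "yuv420p",
         "-c:a", "aac", "-shortest", "-f", "flv",
         "rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY"],
        stdin=subprocess.PIPE)

    for _ in range(FPS * 10):  # ten seconds of generated frames
        frame = numpy.random.randint(0, 256, (HEIGHT, WIDTH, 3), dtype=numpy.uint8)
        ffmpeg.stdin.write(frame.tobytes())

    ffmpeg.stdin.close()
    ffmpeg.wait()

    The random noise stands in for whatever qrcode/moviepy frames the bot generates; replacing the lavfi tone with PCM generated on the fly would need a second pipe, which is the harder part of the problem.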

    As for the Python code, what I have tried so far in order to create a video is the following. It still saves to the hard drive and is not responsive in real time, so it is not very useful to me. The video itself is okay, with one exception: as time passes, the clock shown in the QR code and the video’s own clock drift farther and farther apart toward the end. I can work around that limitation if it shows up while live streaming.

    def make_frame(t):
     img = qrcode.make("Hello! The second is %s!" % t)
     return numpy.array(img.convert("RGB"))

    clip = mpy.VideoClip(make_frame, duration=120)
    clip.write_gif("test.gif",fps=15)

    gifclip = mpy.VideoFileClip("test.gif")
    gifclip.set_duration(120).write_videofile("test.mp4",fps=15)

    My goal is to be able to produce something along the pseudo-code of

    original_video = qrcode_generator("I don't know, a clock, pyotp, today's news sources, just anything that can be generated on the fly!")
    original_video.overlay_text(0,0,"This is some sample text, the left two are coordinates, the right three are font, size, and color", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # NOTICE - I did not add any time measurements to the actual video itself. The whole point is this is a live stream and not a video clip, so the time frame would be now. The 2 seconds listed above is for our pseudo sine wave generator to know how long the audio clip should be, not for the actual streaming library.

    stream.send_to_rtmp_server(original_video) # Doesn't matter if ffmpeg or some native library

    The above example is what I am looking for in terms of video creation in Python and then streaming. I am not trying to create a clip and then stream it later; I am trying to have the program be able to respond to outside events and then update its stream to do whatever it wants. It is sort of like a chat bot, but with video instead of text.

    def track_movement(...):
     ...
     return ...

    original_video = user_submitted_clip(chat.lastVideoMessage)
    original_video.overlay_text(0,0,"The robot watches the user's movements and puts a blue square around it.", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # It would be awesome if I could also figure out how to perform advance actions such as tracking movements or pulling a face out of a clip and then applying effects to it on the fly. I know OpenCV can track movements and I hear that it can work with streams, but I cannot figure out how that works. Any help would be appreciated! Thanks!

    Because I forgot to add the imports, here are some useful imports I have in my file!

    import numpy  # needed for the numpy.array call in make_frame
    import pyotp
    import qrcode
    from io import BytesIO
    from moviepy import editor as mpy

    The library pyotp is for generating one-time password authenticator codes, qrcode is for the QR codes, BytesIO is used for virtual files, numpy converts the PIL image into the array moviepy expects, and moviepy is what I used to generate the GIF and MP4. I believe BytesIO might be useful for piping data to the streaming service, but how that happens depends entirely on how data is sent to the service, whether it is ffmpeg over the command line (from subprocess import Popen, PIPE) or a native library.