Other articles (112)

  • Selection of projects using MediaSPIP

    2 May 2011, by

    The examples below are representative of specific uses of MediaSPIP in real projects.
    MediaSPIP farm @ Infini
    The non-profit organization Infini develops hospitality activities, an internet access point, training, innovative projects in the field of information and communication technologies, and website hosting. It plays a unique and prominent role in the Brest (France) area and at the national level, among the half-dozen associations of its kind. Its members (...)

  • Changing the publication date

    21 June 2013, by

    How do you change a media item's publication date?
    You first need to add a "Date de publication" field in the relevant form mask:
    Administrer > Configuration des masques de formulaires > select "Un média"
    In the "Champs à ajouter" section, check "Date de publication"
    Click "Enregistrer" at the bottom of the page

  • Installation prerequisites

    31 January 2010, by

    Preamble
    This article is not meant to detail how to install these software packages, but rather to provide information on their specific configuration.
    First of all, SPIPMotion, like MediaSPIP, is designed to run on Debian-type Linux distributions and derivatives (Ubuntu...). The documentation on this site therefore refers to those distributions. It can also be used on other Linux distributions, but correct operation cannot be guaranteed there.
    It (...)

On other sites (4214)

  • FFMpeg copy streams without transcode

    8 June 2016, by Zelid

    I'm trying to copy all streams from several files into one file without transcoding them. This is what you would usually do with the ffmpeg utility via ffmpeg -i "file_with_audio.mp4" -i "file_with_video.mp4" -c copy -shortest file_with_audio_and_video.mp4

    This is the code:

    int ffmpegOpenInputFile(const char* filename, AVFormatContext **ic) {

       int ret;

       *ic = avformat_alloc_context();
       if (!(*ic))
           return -1; // Couldn't allocate input context

       if((ret = avformat_open_input(ic, filename, NULL, NULL)) < 0)
           return ret; // Couldn't open file

       // Retrieve stream information
       if ((ret = avformat_find_stream_info(*ic, NULL)) < 0)
           return ret; // Couldn't find stream information

       for (unsigned int i = 0; i < (*ic)->nb_streams; i++) {
           AVStream *stream;
           AVCodecContext *codec_ctx;
           stream = (*ic)->streams[i];
           codec_ctx = stream->codec;
           /* Reencode video & audio and remux subtitles etc. */
           if (codec_ctx->codec_type == AVMEDIA_TYPE_VIDEO
               || codec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) {
               /* Open decoder */
               ret = avcodec_open2(codec_ctx,
                                   avcodec_find_decoder(codec_ctx->codec_id), NULL);
               if (ret < 0) {
                   av_log(NULL, AV_LOG_ERROR, "Failed to open decoder for stream #%u\n", i);
                   return ret;
               }
           }
       }

       // Dump information about file onto standard error
       av_dump_format(*ic, 0, filename, 0);

       return 0;
    }



    int main(int argc, char *argv[]) {

       const char *inputFilename1 = "/avfiles/video_input.mp4";
       const char *inputFilename2 = "/avfiles/audio_input.mp4";
       const char *filename = "/avfiles/out.mp4";

       int ret;

       av_register_all();

       AVFormatContext *ic1 = nullptr;
       AVFormatContext *ic2 = nullptr;
       AVFormatContext *oc = nullptr;

       if ((ret = ffmpegOpenInputFile(inputFilename1, &ic1)) < 0)
           return ret;  // (should also free resources here)

       if ((ret = ffmpegOpenInputFile(inputFilename2, &ic2)) < 0)
           return ret;  // (should also free resources here)

       AVOutputFormat *outfmt = av_guess_format(NULL, filename, NULL);
       if (outfmt == NULL)
           return -1;  // Could not guess output format

       avformat_alloc_output_context2(&oc, outfmt, NULL, filename);
       if (!oc)
           return AVERROR_UNKNOWN;  // Could not create output context

       // populate input streams from all input files
       AVStream **input_streams = NULL;
       int nb_input_streams = 0;
       for (int i = 0; i < ic1->nb_streams; i++) {
           input_streams = (AVStream **) grow_array(input_streams, sizeof(*input_streams), &nb_input_streams,
                                                    nb_input_streams + 1);
           input_streams[nb_input_streams - 1] = ic1->streams[i];
       }
       for (int i = 0; i < ic2->nb_streams; i++) {
           input_streams = (AVStream **) grow_array(input_streams, sizeof(*input_streams), &nb_input_streams,
                                                    nb_input_streams + 1);
           input_streams[nb_input_streams - 1] = ic2->streams[i];
       }

       for (int i = 0; i < nb_input_streams; i++) {
           AVStream *ist = input_streams[i];  // could be named 'm_in_vid_strm'

           // if output context has video codec support and current input stream is video
           if (/*oc->video_codec_id*/ oc->oformat->video_codec != AV_CODEC_ID_NONE && ist != NULL
                                      && ist->codec->codec_type == AVMEDIA_TYPE_VIDEO) {

               AVCodec *out_vid_codec = avcodec_find_encoder(oc->oformat->video_codec);
               if (NULL == out_vid_codec)
                   return -1;  // Couldn't find video encoder

               AVStream *m_out_vid_strm = avformat_new_stream(oc, out_vid_codec);
               if (NULL == m_out_vid_strm)
                   return -1;  // Couldn't output video stream

               m_out_vid_strm->id = 0;  // XXX:

               ret = avcodec_copy_context(m_out_vid_strm->codec, ist->codec);
               if (ret < 0)
                   return ret;  // Failed to copy context

           }

           // if output context has audio codec support and current input stream is audio
           if (/*oc->audio_codec_id*/ oc->oformat->audio_codec != AV_CODEC_ID_NONE && ist != NULL
                                      && ist->codec->codec_type == AVMEDIA_TYPE_AUDIO) {

               AVCodec *out_aud_codec = avcodec_find_encoder(oc->oformat->audio_codec);
               if (nullptr == out_aud_codec)
                   return -1;  // couldn't find audio codec

               AVStream *m_out_aud_strm = avformat_new_stream(oc, out_aud_codec);
               if (nullptr == m_out_aud_strm)
                   return -1;  // couldn't allocate audio out stream

               ret = avcodec_copy_context(m_out_aud_strm->codec, ist->codec);
               if (ret < 0)
                   return ret;  // couldn't copy context

           }
       }

       // finally output header
       if (!(oc->flags & AVFMT_NOFILE)) {

           ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
           if (ret < 0)
               return ret;  // Could not open output file

           av_dump_format(oc, 0, filename, 1);

           ret = avformat_write_header(oc, NULL);
           if (ret < 0)
               return ret; // Error occurred when opening output file

       }

       return 0;

    }

    avformat_write_header(oc, NULL) always returns an error, and I see these messages:

    [mp4 @ 0x7f84ec900a00] Using AVStream.codec.time_base as a timebase hint to the muxer is deprecated. Set AVStream.time_base instead.
    [mp4 @ 0x7f84ec900a00] Tag avc1/0x31637661 incompatible with output codec id '28' ([33][0][0][0])

    But the input and output streams match:

    Input streams from 2 files:
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 2834 kb/s, 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc (default)
    Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s (default)

    Output #0, mp4, to '/Users/alex/Workspace/_qt/tubisto/avfiles/out.mp4':
       Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 2834 kb/s, 47.95 tbc
       Stream #0:1: Audio: aac (libvo_aacenc) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 125 kb/s

    Why does this incompatible output codec error happen?
    What is wrong in my code, and how can I make it copy all streams from all input files into the output file?
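    For reference, the CLI equivalent of what this code attempts, with explicit stream mapping (file names reused from the command in the question), would be:

```shell
# stream-copy the first video stream of input 0 and the first audio
# stream of input 1 into one mp4, without any transcoding
ffmpeg -i file_with_video.mp4 -i file_with_audio.mp4 \
  -map 0:v:0 -map 1:a:0 -c copy -shortest file_with_audio_and_video.mp4
```

    As for the error itself: one commonly cited cause is that avcodec_copy_context also copies the input's codec_tag (avc1 here), which the mp4 muxer then rejects for its own codec id; clearing it after the copy (codec_tag = 0 on the output codec context, for both video and audio) lets the muxer pick a compatible tag. And since the goal is stream copy, opening decoders and passing encoders to avformat_new_stream should not be needed at all.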

  • Combine Audio and Images in Stream

    19 December 2017, by SenorContento

    I would like to be able to create images on the fly, create audio on the fly as well, and combine them into an rtmp stream (for Twitch or YouTube). The goal is to accomplish this in Python 3, as that is the language my bot is written in. Bonus points for not having to save to disk.

    So far, I have figured out how to stream to rtmp servers using ffmpeg by loading a PNG image and playing it on loop, as well as loading an mp3 and combining the two in the stream. The problem is that I have to load at least one of them from a file.

    I know I can use Moviepy to create videos, but I cannot figure out whether or not I can stream the video from Moviepy to ffmpeg or directly to rtmp. I think that I have to generate a lot of really short clips and send them, but I want to know if there’s an existing solution.

    There's also OpenCV, which I hear can stream to rtmp but cannot handle audio.

    A redacted version of an ffmpeg command I have successfully tested with is

    ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    or

    cat Song-Stereo.mp3 | ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i - -c:v libx264 -preset fast -pix_fmt yuv420p -threads 0 -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY

    I know these commands are not set up properly for smooth streaming; the result manages to confuse both Twitch's and YouTube's players, and I will have to figure out how to fix that.
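    On the smoothness point, one common adjustment (an assumption on my part, not tested against these exact services) is to pin the keyframe interval and bitrate so the RTMP player receives regular keyframes; a variant of the command above:

```shell
# fixed GOP (-g), capped bitrate, and explicit AAC audio for RTMP ingest;
# the bitrate numbers are illustrative guesses, not service requirements
ffmpeg -loop 1 -framerate 15 -i ScreenRover.png -i "Song-Stereo.mp3" \
  -c:v libx264 -preset fast -pix_fmt yuv420p \
  -g 30 -b:v 2500k -maxrate 2500k -bufsize 5000k \
  -c:a aac -ar 44100 -b:a 160k \
  -f flv rtmp://SITE-SUCH-AS-TWITCH/.../STREAM-KEY
```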

    The problem with this is I don’t think I can stream both the image and the audio at once when creating them on the spot. I have to load one of them from the hard drive. This becomes a problem when trying to react to a command or user chat or anything else that requires live reactions. I also do not want to destroy my hard drive by constantly saving to it.
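    One way around loading anything from disk, sketched below under assumptions that are not in the question (the frame size, the lavfi sine source for audio, and a placeholder RTMP URL), is to have ffmpeg read raw RGB frames from stdin while generating the audio itself:

```python
import subprocess

WIDTH, HEIGHT, FPS = 320, 240, 15

def make_frame(t):
    """One raw RGB24 frame: a solid colour that shifts with time t (seconds)."""
    r = int(255 * (t % 1.0))
    return bytes([r, 0, 128]) * (WIDTH * HEIGHT)

def ffmpeg_command(rtmp_url):
    """Build an ffmpeg invocation that reads raw frames from stdin and
    generates its own audio with the lavfi sine source, so nothing
    needs to exist on disk."""
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",                                   # video: raw frames on stdin
        "-f", "lavfi", "-i", "sine=frequency=440",   # audio: generated, no file
        "-c:v", "libx264", "-preset", "fast", "-pix_fmt", "yuv420p",
        "-c:a", "aac",
        "-f", "flv", rtmp_url,
    ]

def stream(rtmp_url, seconds):
    """Feed generated frames to ffmpeg for the given duration."""
    proc = subprocess.Popen(ffmpeg_command(rtmp_url), stdin=subprocess.PIPE)
    for i in range(int(seconds * FPS)):
        # each frame is produced in memory just before it is sent, so this
        # loop body is where live reactions (chat commands etc.) would go
        proc.stdin.write(make_frame(i / FPS))
    proc.stdin.close()
    proc.wait()
```

    Nothing is saved to the hard drive here; whether this keeps Twitch/YouTube players happy still depends on the encoder settings, but it shows that both the image and the audio can be created on the spot.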

    As for the python code, what I have tried so far in order to create a video is the following. It still saves to the HD and is not responsive in realtime, so it is not very useful to me. The video itself is okay, with one exception: as the video gets closer to the end, the time shown by the qr code and the video's own clock drift farther and farther apart. I can work around that limitation if it shows up while live streaming.

    def make_frame(t):
     img = qrcode.make("Hello! The second is %s!" % t)
     return numpy.array(img.convert("RGB"))

    clip = mpy.VideoClip(make_frame, duration=120)
    clip.write_gif("test.gif",fps=15)

    gifclip = mpy.VideoFileClip("test.gif")
    gifclip.set_duration(120).write_videofile("test.mp4",fps=15)

    My goal is to be able to produce something along the lines of the pseudo-code of

    original_video = qrcode_generator("I don't know, a clock, pyotp, today's news sources, just anything that can be generated on the fly!")
    original_video.overlay_text(0,0,"This is some sample text, the left two are coordinates, the right three are font, size, and color", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # NOTICE - I did not add any time measurements to the actual video itself. The whole point is this is a live stream and not a video clip, so the time frame would be now. The 2 seconds list above is for our psuedo sine wave generator to know how long the audio clip should be, not for the actual streaming library.

    stream.send_to_rtmp_server(original_video) # Doesn't matter if ffmpeg or some native library

    The above example is what I am looking for in terms of video creation in Python and then streaming. I am not trying to create a clip and stream it later; I am trying to have the program respond to outside events and then update its stream to do whatever it wants. It is sort of like a chat bot, but with video instead of text.

    def track_movement(...):
     ...
     return ...

    original_video = user_submitted_clip(chat.lastVideoMessage)
    original_video.overlay_text(0,0,"The robot watches the user's movements and puts a blue square around it.", Times_New_Roman, 12, Blue)
    original_video.add_audio(sine_wave_generator(0,180,2)) # frequency min-max, seconds

    # It would be awesome if I could also figure out how to perform advance actions such as tracking movements or pulling a face out of a clip and then applying effects to it on the fly. I know OpenCV can track movements and I hear that it can work with streams, but I cannot figure out how that works. Any help would be appreciated! Thanks!

    Because I forgot to add the imports, here are some useful imports from my file:

    import pyotp
    import qrcode
    from io import BytesIO
    from moviepy import editor as mpy

    The library pyotp generates one-time password (OTP) authenticator codes, qrcode generates the qr codes, BytesIO is used for virtual files, and moviepy is what I used to generate the GIF and MP4. I believe BytesIO might be useful for piping data to the streaming service, but how that happens depends entirely on how data is sent to the service, whether via ffmpeg on the command line (from subprocess import Popen, PIPE) or via a native library.

  • Revision 36454: standardize the login form inputs

    19 March 2010, by brunobergot@… — Log

    standardize the login form inputs