Advanced search

Media (91)

Other articles (88)

  • What is a form mask?

    13 June 2013, by

    A form mask is a customization of the publication form for media, sections, news items, editorials, and links to external sites.
    Each object's publication form can therefore be customized.
    To access the form-field customization, go to your MediaSPIP administration area and select "Configuration des masques de formulaires".
    Then select the form to modify by clicking on its object type. (...)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable MediaSPIP release.
    It was officially released on June 21, 2013, and announced here.
    The zip file provided here contains only the sources of MediaSPIP in its standalone version.
    To get a working installation, you must manually install all software dependencies on the server.
    If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for it and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (6595)

  • How to embed subtitles into an mp4 file using gstreamer

    27 August 2021, by Stephen

    My Goal

    I'm trying to embed subtitles into an mp4 file using the mp4mux gstreamer element.

    What I've tried

    The pipeline I would like to use is:

    GST_DEBUG=3 gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! queue ! video/x-h264 ! mp4mux name=mux reserved-moov-update-period=1000 ! filesink location=output.mp4 filesrc location=english.srt ! subparse ! queue ! text/x-raw,format=utf8 ! mux.subtitle_0


    It just demuxes a sample mp4 file for the h.264 stream and then muxes it together with an srt subtitle file.

    The error I get is:

    Setting pipeline to PAUSED ...
    0:00:00.009958915 1324869 0x5624a8c7a0a0 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<filesrc0> pad not activated yet
    Pipeline is PREROLLING ...
    0:00:00.010128080 1324869 0x5624a8c53de0 WARN                 basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: Internal data stream error.
    0:00:00.010129102 1324869 0x5624a8c53e40 WARN                 qtdemux qtdemux_types.c:239:qtdemux_type_get: unknown QuickTime node type pasp
    0:00:00.010140810 1324869 0x5624a8c53de0 WARN                 basesrc gstbasesrc.c:3072:gst_base_src_loop:<filesrc1> error: streaming stopped, reason not-negotiated (-4)
    0:00:00.010172990 1324869 0x5624a8c53e40 WARN                 qtdemux qtdemux.c:3237:qtdemux_parse_trex:<qtdemux0> failed to find fragment defaults for stream 1
    ERROR: from element /GstPipeline:pipeline0/GstFileSrc:filesrc1: Internal data stream error.
    Additional debug info:
    gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstFileSrc:filesrc1:
    streaming stopped, reason not-negotiated (-4)
    ERROR: pipeline doesn't want to preroll.
    Setting pipeline to NULL ...
    Freeing pipeline ...

    My Thoughts

    I believe the issue is not related to the warning above, but rather to mp4mux's incompatibility with srt subtitles.

    The reason I believe this is that other debug logs hint at it, and also that stealing the subtitles from another mp4 file and muxing them back together does work:

    gst-launch-1.0 filesrc location=sample-nosub-avc.mp4 ! qtdemux ! mp4mux name=mux ! filesink location=output.mp4 filesrc location=sample-with-subs.mp4 ! qtdemux name=demux demux.subtitle_1 ! text/x-raw,format=utf8 ! queue ! mux.subtitle_0

    A major catch-22 I'm facing is that mp4 files don't typically support srt subtitles, but gstreamer's subparse element doesn't support parsing mp4 subtitle formats (tx3g, ttxt, etc.), so I'm not sure how I'm meant to put it all together.
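    A common way around this catch-22 outside of gstreamer (sketched here as an assumption about the environment, not a confirmed fix for this pipeline) is to let the ffmpeg CLI convert the srt stream to MP4-native mov_text (tx3g) while stream-copying the audio and video:

```shell
# Hypothetical cross-check using the file names from the question;
# assumes the ffmpeg CLI is installed and on PATH.
# -c copy keeps audio/video untouched; -c:s mov_text converts the SRT
# track to the MP4-native tx3g subtitle format.
ffmpeg -i sample-nosub-avc.mp4 -i english.srt \
  -map 0 -map 1 -c copy -c:s mov_text output.mp4
```

    The resulting file can then be fed back through qtdemux to confirm the subtitle track is present.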

    I'm very sorry for the lengthy question, but I've tried many things, so it was difficult to condense. Any hints or help are appreciated. Thank you.

  • iOS video after trimming plays with audio/video out of sync on non-iOS devices

    31 August 2015, by gavinHe

    After trimming a video, I send it to an Android device and play it there; the audio/video is out of sync: the audio is several seconds behind the video. The same video plays normally on iOS devices.
    1. I trim the video with code like this:

    - (IBAction)showTrimmedVideo:(UIButton *)sender
    {
    [self deleteTmpFile];

    NSURL *videoFileUrl = [NSURL fileURLWithPath:self.originalVideoPath];

    AVAsset *anAsset = [[AVURLAsset alloc] initWithURL:videoFileUrl options:nil];
    NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
    if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {

       self.exportSession = [[AVAssetExportSession alloc]
                             initWithAsset:anAsset presetName:AVAssetExportPresetHighestQuality];
       // Implementation continues.

       NSURL *furl = [NSURL fileURLWithPath:self.tmpVideoPath];

       self.exportSession.outputURL = furl;
       self.exportSession.outputFileType = AVFileTypeMPEG4;

       CMTime start = CMTimeMakeWithSeconds(self.startTime, anAsset.duration.timescale);
       CMTime duration = CMTimeMakeWithSeconds(self.stopTime-self.startTime, anAsset.duration.timescale);
       CMTimeRange range = CMTimeRangeMake(start, duration);
       self.exportSession.timeRange = range;

       self.trimBtn.hidden = YES;
       self.myActivityIndicator.hidden = NO;
       [self.myActivityIndicator startAnimating];
       [self.exportSession exportAsynchronouslyWithCompletionHandler:^{

           switch ([self.exportSession status]) {
               case AVAssetExportSessionStatusFailed:
                   NSLog(@"Export failed: %@", [[self.exportSession error] localizedDescription]);
                   break;
               case AVAssetExportSessionStatusCancelled:
                   NSLog(@"Export canceled");
                   break;
               default:
                   NSLog(@"NONE");
                   dispatch_async(dispatch_get_main_queue(), ^{
                       [self.myActivityIndicator stopAnimating];
                       self.myActivityIndicator.hidden = YES;
                       self.trimBtn.hidden = NO;
                       [self playMovie:self.tmpVideoPath];
                   });
                   break;
           }
       }];
    }
    }
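    For comparison (a hypothetical cross-check, not part of the question), the same trim can be reproduced with the ffmpeg CLI to see whether the sync problem is specific to AVAssetExportSession; the start/duration values below are placeholders:

```shell
# Hypothetical: trim the same range without re-encoding (assumes ffmpeg on PATH).
# The -ss/-t values stand in for self.startTime / (stopTime - startTime).
# Note: with -c copy, cuts snap to keyframes, so start times can shift slightly.
ffmpeg -ss 3.0 -t 10.0 -i originalVideo.mp4 -c copy trimVideo-ffmpeg.mp4
```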

    2. I send the trimmed video to a server, and an Android device downloads it from there; the audio/video is out of sync. At first I suspected the server was doing something wrong, so I copied the video to the Android device directly over USB, but the problem persisted.

    3. So I analyzed the trimmed video with the ffmpeg tool:
    ffmpeg -i trimVideo.mp4
    and found that the start time of trimVideo.mp4 is a negative number.
    Here is what ffmpeg prints:

    Metadata:
      major_brand      : qt
      minor_version    : 0
      compatible_brands: qt
      creation_time    : 2015-08-29 12:22:13
      encoder          : Lavf56.15.102
    Duration: 00:02:21.77, start: -4.692568, bitrate: 359 kb/s
      Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 24000 Hz, stereo, fltp, 69 kb/s (default)
      Metadata:
        creation_time  : 2015-08-29 12:22:13
        handler_name   : Core Media Data Handler
      Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 512x288 [SAR 1:1 DAR 16:9], 277 kb/s, 15.16 fps, 15.17 tbr, 12136 tbn, 30.34 tbc (default)
      Metadata:
        creation_time  : 2015-08-29 12:22:13
        handler_name   : Core Media Data Handler
        encoder        : 'avc1'

    I have been puzzled by this bug for several days. I am sorry for my bad English, and I really need your help. Thanks.
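    A negative start time like the one above is often the root cause of desync on players that ignore MP4 edit lists. As a hedged workaround (an ffmpeg-side remux, not a change to the iOS code), shifting the timestamps to be non-negative is one way to test that theory:

```shell
# Sketch (assumes ffmpeg on PATH): re-mux without re-encoding, shifting
# timestamps so the output no longer starts at a negative time.
ffmpeg -i trimVideo.mp4 -c copy -avoid_negative_ts make_zero trimVideo-fixed.mp4
```

    If the remuxed file plays in sync on Android, the edit-list offset produced by the trim is the likely culprit.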

  • Extract audio from video using autogen ffmpeg C# in Unity

    5 December 2024, by Johan Sophie

    Hi, I'm using FFmpeg.AutoGen to extract audio from a video in Unity, but with the following code the output file cannot be written; it ends up as 0 KB. What is the issue, or does someone have an example of extracting audio with this library? Apologies for my English. This is the library's GitHub repository:
    https://github.com/Ruslan-B/FFmpeg.AutoGen

    unsafe void TestExtractAudio()
    {
        string inFile = Application.streamingAssetsPath + "/" + strFileName;
        string outFile = Application.streamingAssetsPath + "/" + strFileNameAudio;

        AVOutputFormat* outFormat = null;
        AVFormatContext* inFormatContext = null;
        AVFormatContext* outFormatContext = null;
        AVPacket packet;

        ffmpeg.av_register_all();

        inFormatContext = ffmpeg.avformat_alloc_context();
        outFormatContext = ffmpeg.avformat_alloc_context();

        if (ffmpeg.avformat_open_input(&inFormatContext, inFile, null, null) < 0)
        {
            throw new ApplicationException("Could not open input file.");
        }

        if (ffmpeg.avformat_find_stream_info(inFormatContext, null) < 0)
        {
            throw new ApplicationException("Failed to retrieve input stream info.");
        }

        ffmpeg.avformat_alloc_output_context2(&outFormatContext, null, null, outFile);
        if (outFormatContext == null)
        {
            throw new ApplicationException("Could not create output context");
        }

        outFormat = outFormatContext->oformat;

        AVStream* inStream = inFormatContext->streams[1];
        AVStream* outStream = ffmpeg.avformat_new_stream(outFormatContext, inStream->codec->codec);
        if (outStream == null)
        {
            throw new ApplicationException("Failed to allocate output stream.");
        }

        if (ffmpeg.avcodec_copy_context(outStream->codec, inStream->codec) < 0)
        {
            throw new ApplicationException("Couldn't copy input stream codec context to output stream codec context");
        }

        outFormatContext->audio_codec_id = AVCodecID.AV_CODEC_ID_MP3;

        int retcode = ffmpeg.avio_open(&outFormatContext->pb, outFile, ffmpeg.AVIO_FLAG_WRITE);
        if (retcode < 0)
        {
            throw new ApplicationException("Couldn't open output file");
        }

        int returnCode = ffmpeg.avformat_write_header(outFormatContext, null);
        if (returnCode < 0)
        {
            throw new ApplicationException("Error occurred opening output file.");
        }

        while (true)
        {
            if (ffmpeg.av_read_frame(inFormatContext, &packet) < 0)
            {
                break;
            }

            if (packet.stream_index == 1)
            {
                inStream = inFormatContext->streams[1];
                outStream = outFormatContext->streams[0];

                // TODO: Replicate log packet functionality to print out what's inside the packet.

                packet.pts = ffmpeg.av_rescale_q_rnd(packet.pts, inStream->time_base, outStream->time_base,
                    AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
                packet.dts = ffmpeg.av_rescale_q_rnd(packet.dts, inStream->time_base, outStream->time_base,
                    AVRounding.AV_ROUND_NEAR_INF | AVRounding.AV_ROUND_PASS_MINMAX);
                packet.duration = ffmpeg.av_rescale_q(packet.duration, inStream->time_base, outStream->time_base);

                int returncode = ffmpeg.av_interleaved_write_frame(outFormatContext, &packet);
            }

            ffmpeg.av_packet_unref(&packet);
        }

        ffmpeg.av_write_trailer(outFormatContext);

        ffmpeg.avformat_close_input(&inFormatContext);
        ffmpeg.avformat_free_context(outFormatContext);

        Console.WriteLine("Press any key to continue...");
        Console.ReadKey();
    }

    The value of returnCode (from avformat_write_header) is less than 0. Can someone help fix this? Thanks so much.
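    One thing worth noting: the snippet stream-copies the input codec parameters (likely AAC) into a file named .mp3 while only setting audio_codec_id, so avformat_write_header may be rejecting an AAC stream in an MP3 container. As an independent sanity check (an assumption, not part of the asker's code), the same extraction can be tried with the ffmpeg CLI, actually transcoding to MP3:

```shell
# Hypothetical cross-check (assumes ffmpeg on PATH; file names are placeholders):
# -vn drops the video stream, libmp3lame re-encodes the audio track to MP3.
ffmpeg -i input.mp4 -vn -c:a libmp3lame -q:a 2 output.mp3
```

    If this works, transcoding (or choosing a container that accepts the input codec, e.g. .m4a for AAC) in the C# code is the direction to pursue.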