Advanced search

Media (1)

Keyword: - Tags -/ticket

Other articles (61)

  • MediaSPIP v0.2

    21 June 2013

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, announced here.
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    As with the previous version, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in the standalone version.
    For a working installation, all software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, you will also need to make further modifications (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MédiaSpip installation is at version 0.2 or higher. If in doubt, contact the administrator of your MédiaSpip to find out.

On other sites (7462)

  • Add silence to wav file at specific time using ffmpeg [duplicate]

    27 March 2017, by Syed Armaan Hussain

    I have an audio (.wav) file and want to add silence at a specific time.
    For example, given a .wav file 60 seconds long, I want to insert silence from the 14th to the 18th second, overlaying the existing audio at that point rather than increasing the duration.
    I am looking for an FFmpeg command for this, but have had no luck yet.
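One possible approach (a sketch, not verified against the asker's file; the filenames are placeholders) uses FFmpeg's `volume` audio filter with a timeline `enable` expression, which mutes only the given time range and leaves the total duration unchanged:

```shell
# Mute (overlay with silence) seconds 14-18 of input.wav without
# changing its duration; everything outside the range is untouched.
ffmpeg -i input.wav -af "volume=enable='between(t,14,18)':volume=0" output.wav
```

Because the filter re-encodes the audio, the output sample format follows the WAV muxer's defaults unless overridden (e.g. with `-c:a pcm_s16le`).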

  • ffmpeg: getting an error at muxing time

    5 February 2017, by Asif Shariar

    I am getting an error when muxing a file. I am following the muxing example, but ret always comes back with an FFmpeg error value.

    Here is the code I use:

    /* Based on FFmpeg's muxing.c example; add_stream(), open_video(),
     * open_audio(), write_video_frame(), write_audio_frame() and
     * close_stream() are the helpers from that example. */
    OutputStream video_st = { 0 }, audio_st = { 0 };
    const char *filename;
    AVOutputFormat *fmt;
    AVFormatContext *oc;
    AVCodec *audio_codec = nullptr, *video_codec = nullptr;
    char errbuf[AV_ERROR_MAX_STRING_SIZE];
    int ret;
    int have_video = 0, have_audio = 0;
    int encode_video = 0, encode_audio = 0;
    AVDictionary *opt = NULL;

    /* Initialize libavcodec, and register all codecs and formats. */
    av_register_all();

    filename = "C:/Users/Admin/Desktop/Videotest/_test.mp4";

    /* Allocate the output media context. */
    avformat_alloc_output_context2(&oc, NULL, NULL, filename);
    if (!oc) {
       printf("Could not deduce output format from file extension: using MPEG.\n");
       avformat_alloc_output_context2(&oc, NULL, "mpeg", filename);
    }
    if (!oc) {
       return 1; /* was `exit;`, which is a no-op, not a call */
    }

    fmt = oc->oformat;

    /* Add the audio and video streams using the default format codecs
     * and initialize the codecs. */
    if (fmt->video_codec != AV_CODEC_ID_NONE) {
       add_stream(&video_st, oc, &video_codec, fmt->video_codec);
       have_video = 1;
       encode_video = 1;
    }
    if (fmt->audio_codec != AV_CODEC_ID_NONE) {
       add_stream(&audio_st, oc, &audio_codec, fmt->audio_codec);
       have_audio = 1;
       encode_audio = 1;
    }

    /* Now that all the parameters are set, we can open the audio and
     * video codecs and allocate the necessary encode buffers. */
    if (have_video)
       open_video(oc, video_codec, &video_st, opt);

    if (have_audio)
       open_audio(oc, audio_codec, &audio_st, opt);

    av_dump_format(oc, 0, filename, 1);

    /* Open the output file, if needed. */
    if (!(fmt->flags & AVFMT_NOFILE)) {
       ret = avio_open(&oc->pb, filename, AVIO_FLAG_WRITE);
       if (ret < 0) {
           av_strerror(ret, errbuf, sizeof(errbuf));
           fprintf(stderr, "Could not open '%s': %s\n", filename, errbuf);
           return 1;
       }
    }

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, &opt);
    if (ret < 0) {
       av_strerror(ret, errbuf, sizeof(errbuf));
       fprintf(stderr, "Error occurred when writing header: %s\n", errbuf);
       return 1;
    }

    while (encode_video || encode_audio) {
       /* Select the stream to encode. */
       if (encode_video &&
           (!encode_audio || av_compare_ts(video_st.next_pts, video_st.enc->time_base,
               audio_st.next_pts, audio_st.enc->time_base) <= 0)) {
           encode_video = !write_video_frame(oc, &video_st);
       }
       else {
           encode_audio = !write_audio_frame(oc, &audio_st);
       }
    }

    /* Write the trailer, if any. The trailer must be written before you
     * close the CodecContexts open when you wrote the header; otherwise
     * av_write_trailer() may try to use memory that was freed on
     * av_codec_close(). */
    av_write_trailer(oc);

    /* Close each codec. */
    if (have_video)
       close_stream(oc, &video_st);
    if (have_audio)
       close_stream(oc, &audio_st);

    if (!(fmt->flags & AVFMT_NOFILE))
       /* Close the output file. */
       avio_closep(&oc->pb);

    /* Free the stream. */
    avformat_free_context(oc);

    Error log:

    av_log(s, AV_LOG_ERROR, "muxer does not support non seekable output\n");

    The error is raised here:

    /* Write the stream header, if any. */
    ret = avformat_write_header(oc, &opt);

    How can I fix this error?
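For reference, that log line is emitted by the MOV/MP4 muxer when its output is not seekable, which the default MP4 layout requires. One workaround sometimes suggested in that situation (only a guess at the cause here, since the question does not show how the output target behaves) is to request fragmented MP4 via muxer options, which removes the seekability requirement; `oc`, `opt`, and `ret` are the variables from the code above:

```c
/* Fragmented MP4 writes self-contained fragments, so the muxer never
 * needs to seek back to patch the header. */
AVDictionary *opt = NULL;
av_dict_set(&opt, "movflags", "frag_keyframe+empty_moov", 0);
ret = avformat_write_header(oc, &opt);
```

The resulting file plays in most modern players but differs structurally from a regular MP4.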

  • Controlling end time in video player via AVPacket information / setting pts/dts properly

    26 January 2017, by SyntheticGio

    I’m currently working in C/C++ using the FFMPEG core (libavcodec, etc.). I’m capturing a stream and writing it in chunks to different files. So imagine the stream is 5 minutes in length and I’m writing five files of one minute in length each. I’m able to do this successfully.

    Currently, each file after the first file has a start time equal to the time it would have been in the un-chunked stream. So the second video file starts at 1 minute, the third starts at 2 minutes, etc. This was inadvertent but as it turns out is beneficial in my particular use case.

    VLC and other video players I've tried report this start time "properly", but the end time shows as the duration (not start time + duration). My gut feeling is that the player simply assumes all videos start at 0 and shows the length as the "end time", but I don't actually know this, so I'd like to know if there is any way to set the AVPacket information so that, for example, the player for the third video would start at 2 minutes and end at 3 minutes (for a one-minute video)?

    As an alternative, if I wanted to do this the traditional way (reset each chunk to start at time 0), I assume I'd normalize AVPacket.pts and AVPacket.dts by subtracting the values of the final packet in the previous chunk? This strategy seems like it would work for pts, but I'm less sure about dts. I feel it would generally work for dts, but there might be cases where it fails, so I'd like to know whether this is a safe method (or whether there is a better method I should use in this case).