
Other articles (52)

  • Submit bugs and patches

    13 April 2011

    Unfortunately, software is never perfect.
    If you think you have found a bug, report it using our ticket system. To help us fix it, please provide the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site / page in question.
    If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
    You may also (...)

  • Customising the categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be thought of as a rubrique (section).
    For a document of type category, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of type media, the fields not displayed by default are: Descriptif rapide (short description)
    It is also in this configuration section that you can specify the (...)

  • HTML5 audio and video support

    13 April 2011, by

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The player used by MediaSPIP was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (11660)

  • X264: How to access NAL units from the encoder?

    18 April 2014, by user1884325

    When I call

    frame_size = x264_encoder_encode(encoder, &nals, &i_nals, &pic_in, &pic_out);

    and subsequently write each NAL to a file like this:

        if (frame_size >= 0)
        {
           int i;
           int j;

           for (i = 0; i < i_nals; i++)
           {
              printf("******************* NAL %d (%d bytes) *******************\n", i, nals[i].i_payload);
              fwrite(&(nals[i].p_payload[0]), 1, nals[i].i_payload, fid);
           }
        }

    then I get this:

    [screenshot: beginning of the NAL file]

    My questions are:

    1) Is it normal that there are readable parameters at the beginning of the file?

    2) How do I configure the x264 encoder so that it returns frames that I can send via UDP without the packets getting fragmented (the size must stay below 1390 bytes or thereabouts)?

    3) With x264.exe I pass in these options:

    "--threads 1 --profile baseline --level 3.2 --preset ultrafast --bframes 0 --force-cfr --no-mbtree --sync-lookahead 0 --rc-lookahead 0 --keyint 1000 --intra-refresh"

    How do I map those to the settings in the x264 parameter structure (x264_param_t)? (One possible mapping is sketched after these questions.)

    4) I have been told that the x264 static library doesn't support bitmap input to the encoder and that I have to use libswscale to convert the 24-bit RGB input bitmap to YUV2. The encoder supposedly only takes YUV2 as input? Is this true? If so, how do I build libswscale for the x264 static library?
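
    For questions 2 and 3, the sketch below shows one possible way to map those command-line options onto x264_param_t. The field names come from the public x264.h header, but the exact mapping is an assumption to verify against the x264 build you link with; i_slice_max_size is the usual knob for capping NAL size so that each slice fits in a single UDP datagram.

        #include <x264.h>

        /* Sketch: map "--threads 1 --profile baseline --level 3.2 --preset ultrafast
         * --bframes 0 --force-cfr --no-mbtree --sync-lookahead 0 --rc-lookahead 0
         * --keyint 1000 --intra-refresh" onto x264_param_t. */
        static int setup_encoder_params(x264_param_t *param, int width, int height, int fps)
        {
           /* --preset ultrafast */
           if (x264_param_default_preset(param, "ultrafast", NULL) < 0)
               return -1;

           param->i_threads        = 1;      /* --threads 1 */
           param->i_level_idc      = 32;     /* --level 3.2 */
           param->i_bframe         = 0;      /* --bframes 0 */
           param->b_vfr_input      = 0;      /* --force-cfr */
           param->rc.b_mb_tree     = 0;      /* --no-mbtree */
           param->i_sync_lookahead = 0;      /* --sync-lookahead 0 */
           param->rc.i_lookahead   = 0;      /* --rc-lookahead 0 */
           param->i_keyint_max     = 1000;   /* --keyint 1000 */
           param->b_intra_refresh  = 1;      /* --intra-refresh */

           param->i_width   = width;
           param->i_height  = height;
           param->i_fps_num = fps;
           param->i_fps_den = 1;

           /* Question 2: ask the encoder to split slices so that each NAL stays
              below an assumed UDP payload budget of 1390 bytes. */
           param->i_slice_max_size = 1390;

           /* --profile baseline, applied last as the CLI does. */
           return x264_param_apply_profile(param, "baseline");
        }

    For question 4, libswscale does handle this kind of packed-RGB to planar-YUV conversion. A minimal sketch, assuming 24-bit BGR input and an x264_picture_t allocated as I420 (YUV 4:2:0) with x264_picture_alloc, could look like this; reusing one SwsContext for all frames instead of rebuilding it per call would be cheaper.

        #include <stdint.h>
        #include <x264.h>
        #include <libswscale/swscale.h>
        #include <libavutil/pixfmt.h>

        /* Sketch: convert one packed 24-bit BGR frame into the planar YUV 4:2:0
         * buffers of an x264_picture_t previously allocated with
         * x264_picture_alloc(pic_in, X264_CSP_I420, width, height). */
        static void bgr24_to_i420(const uint8_t *bgr, int width, int height,
                                  x264_picture_t *pic_in)
        {
           struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_BGR24,
                                                   width, height, AV_PIX_FMT_YUV420P,
                                                   SWS_BILINEAR, NULL, NULL, NULL);
           const uint8_t *src[1] = { bgr };
           int src_stride[1]     = { 3 * width };   /* packed BGR: 3 bytes per pixel */

           sws_scale(sws, src, src_stride, 0, height,
                     pic_in->img.plane, pic_in->img.i_stride);
           sws_freeContext(sws);
        }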

  • Convert multiple files in multiple folders with ffmpeg

    6 April 2020, by SaeiD

    I want to use ffmpeg to convert all the files in multiple folders. For example, I want to convert all the audio files in more than 170 folders with ffmpeg at once:

    ..\voice\SP_WL6_kismet1_a_LOC_INT\snd_vo_SP_WL_wav
    ..\voice\SP_WL6_kismet1_a_LOC_INT\ed_vo_SP_WL_wav
    ....
    ....
    ....

    These folders also contain files in other formats.

    Across these folders I have more than 1000 .ogg files, and I want to convert all of them to .wav at once.
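
    A minimal command-line sketch, assuming a Windows cmd prompt (the paths above use backslashes) and that each .ogg should become a .wav with the same name in the same folder; inside a .bat file the %F variables would be doubled to %%F:

        for /R ..\voice %F in (*.ogg) do ffmpeg -n -i "%F" "%~dpnF.wav"

    The -n flag tells ffmpeg not to overwrite an existing .wav, and files in other formats are left alone because the loop only matches *.ogg.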

  • Record an rtsp stream to a file (muxing)

    11 April 2014, by user3521863
    AVFormatContext *g_oc = NULL;
    AVStream *g_in_audio_st, *g_in_video_st;
    AVStream *g_out_audio_st, *g_out_video_st;
    int audio_pts = 0, video_pts = 0, audio_dts = 0, video_dts = 0;
    int last_video_pts = 0;
    AVPacket outpkt, *av_pkt;

    // initialize video codec
    static void init_video_codec(AVFormatContext *context) {
       LOGI(1, "enter init_video_codec");
       AVFormatContext *in_format_ctx = NULL;
       AVCodecContext *avcodec_ctx = NULL;
       int fps = 0;

       if(context->streams[1]->r_frame_rate.num != AV_NOPTS_VALUE &&
               context->streams[1]->r_frame_rate.den != 0)
           fps = context->streams[1]->r_frame_rate.num / context->streams[1]->r_frame_rate.den;
       else
           fps = 25;

       g_out_video_st = avformat_new_stream(g_oc, context->streams[1]->codec->codec);
       LOGI(1, "video avformat_new_stream");
       if( g_out_video_st == NULL ) {
           LOGE(1, "Fail to Allocate Output Video Stream");
           return ;
       }
       else {
           LOGI(1, "Allocated Video Stream");
           if( avcodec_copy_context(g_out_video_st->codec, context->streams[1]->codec) != 0 ) {
               LOGE(1, "Failed to video Copy Context");

               return ;
           }
           else {
               LOGI(1, "Success to video Copy Context");
    // how to set the video stream parameters?

               g_out_video_st->sample_aspect_ratio.den = g_in_video_st->codec->sample_aspect_ratio.den;
               g_out_video_st->sample_aspect_ratio.num = g_in_video_st->codec->sample_aspect_ratio.num;
               g_out_video_st->codec->codec_id         = g_in_video_st->codec->codec->id;
               g_out_video_st->codec->time_base.num    = 1;
               g_out_video_st->codec->time_base.den    = fps * (g_in_video_st->codec->ticks_per_frame);
               g_out_video_st->time_base.num           = 1;
               g_out_video_st->time_base.den           = 1000;
               g_out_video_st->r_frame_rate.num        = fps;
               g_out_video_st->r_frame_rate.den        = 1;
               g_out_video_st->avg_frame_rate.den      = 1;
               g_out_video_st->avg_frame_rate.num      = fps;
               g_out_video_st->codec->width            = g_frame_width;
               g_out_video_st->codec->height           = g_frame_height;
               g_out_video_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
           }
       }

       LOGI(1, "end video init");
    }

    // initialize audio codec
    static void init_audio_codec(AVFormatContext *context) {
       LOGI(1, "enter init_audio_codec");
       AVFormatContext *in_format_ctx = NULL;
       AVCodecContext *avcodec_ctx = NULL;

       g_out_audio_st = avformat_new_stream(g_oc, context->streams[0]->codec->codec);
       LOGI(1, "audio avformat_new_stream");
       if( avcodec_copy_context(g_out_audio_st->codec, context->streams[0]->codec) != 0 ) {
           LOGE(1, "Failed to Copy audio Context");

           return ;
       }
       else {
           LOGI(1, "Success to Copy audio Context");
    // how to set the audio stream parameters?
           g_out_audio_st->codec->codec_id         = g_in_audio_st->codec->codec_id;
           g_out_audio_st->codec->codec_tag        = 0;
           g_out_audio_st->pts                     = g_in_audio_st->pts;
           g_out_audio_st->time_base.num           = g_in_audio_st->time_base.num;
           g_out_audio_st->time_base.den           = g_in_audio_st->time_base.den;
           g_out_audio_st->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
       }

       LOGI(1, "end init audio");
    }

    // write video stream
    static void write_video_stream(AVPacket *pkt) {
       av_pkt = NULL;
       av_pkt = pkt;

       if( pkt == NULL || sizeof(*pkt) == 0 )
           return;

       av_rescale_q(av_pkt->pts, g_in_video_st->time_base, g_in_video_st->codec->time_base);
       av_rescale_q(av_pkt->dts, g_in_video_st->time_base, g_in_video_st->codec->time_base);

       av_init_packet(&outpkt);

       if( av_pkt->pts != AV_NOPTS_VALUE ) {
           if( last_video_pts == video_pts ) {
               video_pts++;
               last_video_pts = video_pts;
           }
           outpkt.pts = video_pts;
       }
       else {
           outpkt.pts = AV_NOPTS_VALUE;
       }

       if( av_pkt->dts == AV_NOPTS_VALUE )
           outpkt.dts = AV_NOPTS_VALUE;
       else
           outpkt.dts = video_pts;

       outpkt.data = av_pkt->data;
       outpkt.size = av_pkt->size;
       outpkt.stream_index = av_pkt->stream_index;
       outpkt.flags |= AV_PKT_FLAG_KEY;
       last_video_pts = video_pts;

       if(av_interleaved_write_frame(g_oc, &outpkt) < 0) {
    //  if(av_write_frame(g_oc, &outpkt) < 0) {
           LOGE(1, "Failed Video Write");
       }
       else {
           g_out_video_st->codec->frame_number++;
       }

       if( !&outpkt || sizeof(outpkt) == 0 )
           return;
       if( !av_pkt || sizeof(*av_pkt) == 0 )
           return;

       av_free_packet(&outpkt);
    }

    // write audio stream
    static void write_audio_stream(AVPacket *pkt) {
       av_pkt = NULL;
       av_pkt = pkt;

       if( pkt == NULL || sizeof(*pkt) == 0 )
               return;

       av_rescale_q(av_pkt->pts, g_in_audio_st->time_base, g_in_audio_st->codec->time_base);
       av_rescale_q(av_pkt->dts, g_in_audio_st->time_base, g_in_audio_st->codec->time_base);

       av_init_packet(&outpkt);

       if(av_pkt->pts != AV_NOPTS_VALUE)
           outpkt.pts = audio_pts;
       else
           outpkt.pts = AV_NOPTS_VALUE;

       if(av_pkt->dts == AV_NOPTS_VALUE)
           outpkt.dts = AV_NOPTS_VALUE;
       else {
           outpkt.dts = audio_pts;

           if( outpkt.pts >= outpkt.dts)
               outpkt.dts = outpkt.pts;

           if(outpkt.dts == audio_dts)
               outpkt.dts++;

           if(outpkt.pts < outpkt.dts) {
               outpkt.pts = outpkt.dts;
               audio_pts = outpkt.pts;
           }

           outpkt.data = av_pkt->data;
           outpkt.size = av_pkt->size;
           outpkt.stream_index = av_pkt->stream_index;
           outpkt.flags |= AV_PKT_FLAG_KEY;
           video_pts = audio_pts;
           audio_pts++;

           if( av_interleaved_write_frame(g_oc, &outpkt) < 0 ) {
    //      if( av_write_frame(g_oc, &outpkt) < 0 ) {
               LOGE(1, "Failed Audio Write");
           }
           else {
               g_out_audio_st->codec->frame_number++;
           }

           if( !&outpkt || sizeof(outpkt) == 0 )
               return;
           if( !av_pkt || sizeof(*av_pkt) == 0 )
               return;

           av_free_packet(&outpkt);
       }
    }

    Here is the result: recorded file
    Here is the full source: player.c

    I want to record an rtsp stream to a file while it is playing.
    I have tried testing the video and audio streams while changing the parameters,
    but in the resulting file the video and audio are not in sync.
    I have searched for information about ffmpeg, but almost everything I found only covers running the command-line tool or recording video.
    Please advise me.
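
    One thing to note in the code above: av_rescale_q() returns the rescaled value rather than modifying its argument, so the calls in write_video_stream() and write_audio_stream() discard their result and the packet timestamps are never actually converted to the output time base, which is a common cause of audio/video desync when remuxing. A minimal sketch of the usual per-packet timestamp handling for stream copy, following the standard FFmpeg remuxing pattern and assuming in_st and out_st are the matching input and output AVStream pointers, looks like this:

        #include <libavformat/avformat.h>
        #include <libavutil/mathematics.h>

        /* Sketch: rescale a copied packet's timestamps from the input stream's
         * time base to the output stream's time base, then hand it to the muxer.
         * Both audio and video packets can go through this single path;
         * av_interleaved_write_frame() takes care of interleaving by dts. */
        static int write_copied_packet(AVFormatContext *oc, AVPacket *pkt,
                                       AVStream *in_st, AVStream *out_st)
        {
           pkt->pts = av_rescale_q_rnd(pkt->pts, in_st->time_base, out_st->time_base,
                                       AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
           pkt->dts = av_rescale_q_rnd(pkt->dts, in_st->time_base, out_st->time_base,
                                       AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
           pkt->duration = av_rescale_q(pkt->duration, in_st->time_base, out_st->time_base);
           pkt->pos = -1;
           pkt->stream_index = out_st->index;

           return av_interleaved_write_frame(oc, pkt);
        }

    With the timestamps carried through like this, there should be no need to invent pts/dts values by hand the way write_video_stream() and write_audio_stream() do above.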