
Media (91)

Other articles (111)

  • (De)Activating features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to enable plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated to work perfectly with each (...)

  • The plugin: Podcasts.

    14 July 2010, by

    The podcasting problem is, once again, a problem that reveals the state of standardization of data transport on the Internet.
    Two interesting formats exist: the one developed by Apple, heavily geared toward iTunes, whose spec is here; and the "Media RSS Module" format, which is more "free" and backed notably by Yahoo and the Miro software.
    File types supported in feeds
    Apple's format only allows the following formats in its feeds: .mp3 audio/mpeg .m4a audio/x-m4a .mp4 (...)

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing applications and platforms, MediaSPIP aims to handle as many different document formats as possible, whether they are: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (9968)

  • FFmpeg concat images and audio: can't control framerate

    27 September 2016, by kensonLiang

    Good day!
    I want to concatenate images and audio into a video, and I found a command like this:

    ffmpeg -ss 00:00:00 -t 00:00:10 -i a.mp3 -ss 00:01:01 -t 00:00:10 -i b.mp3 -thread_queue_size 1024 -r 3 -i C:\f1\im\%08d.jpg -filter_complex "[0:a] [1:a] concat=v=0:a=1 [outa];movie=im/%08d.jpg,fps=3 [img]" -map [outa] -map [img] output.mp4

    It works, but no matter what I set fps and -r to, the video stream completes within 1 second, and after that first second the audio keeps playing over the last frame of the video.
    Please help!

  • Decode video with CUDA nvcuvid and ffmpeg [closed]

    25 April 2013, by Oleksandr Kyrpa

    I am starting to implement a custom video decoder that uses the CUDA hardware decoder to produce YUV frames, which I will then encode.

    How can I fill the "CUVIDPICPARAMS" struct?
    Is it possible?

    My algorithm is:

    To get video stream packets I use the ffmpeg dev libraries avcodec, avformat...

    My steps:

    1) Open the input file:

    avformat_open_input(&ff_formatContext,in_filename,nullptr,nullptr);

    2) Get the video stream properties:

    avformat_find_stream_info(ff_formatContext,nullptr);

    3) Get the video stream:

    ff_video_stream=ff_formatContext->streams[i];

    4) Get the CUDA device and initialize it:

    cuDeviceGet(&cu_device,0);
    CUcontext cu_vid_ctx;

    5) Initialize the CUDA video decoder and set its creation params:

    CUVIDDECODECREATEINFO *cu_decoder_info=new CUVIDDECODECREATEINFO;
    memset(cu_decoder_info,0,sizeof(CUVIDDECODECREATEINFO));
    ...
    cuvidCreateDecoder(cu_video_decoder,cu_decoder_info);

    6) Read the frame data into an AVPacket:

    av_read_frame(ff_formatContext,ff_packet);

    And now I need to decode the frame packet with the CUDA video decoder, which in theory is:

    cuvidDecodePicture(pDecoder,&picParams);

    But before that I need to fill a CUVIDPICPARAMS:

    CUVIDPICPARAMS picParams ;//=new CUVIDPICPARAMS ;
    memset(&picParams, 0, sizeof(CUVIDPICPARAMS)) ;

    How can I fill the "CUVIDPICPARAMS" struct?

    typedef struct _CUVIDPICPARAMS
    {
       int PicWidthInMbs;      // Coded Frame Size
       int FrameHeightInMbs;   // Coded Frame Height
       int CurrPicIdx;         // Output index of the current picture
       int field_pic_flag;     // 0=frame picture, 1=field picture
       int bottom_field_flag;  // 0=top field, 1=bottom field (ignored if field_pic_flag=0)
       int second_field;       // Second field of a complementary field pair
       // Bitstream data
       unsigned int nBitstreamDataLen;        // Number of bytes in bitstream data buffer
       const unsigned char *pBitstreamData;   // Ptr to bitstream data for this picture (slice-layer)
       unsigned int nNumSlices;               // Number of slices in this picture
       const unsigned int *pSliceDataOffsets; // nNumSlices entries, contains offset of each slice within the bitstream data buffer
       int ref_pic_flag;       // This picture is a reference picture
       int intra_pic_flag;     // This picture is entirely intra coded
       unsigned int Reserved[30];             // Reserved for future use
       // Codec-specific data
       union {
           CUVIDMPEG2PICPARAMS mpeg2;          // Also used for MPEG-1
           CUVIDH264PICPARAMS h264;
           CUVIDVC1PICPARAMS vc1;
           CUVIDMPEG4PICPARAMS mpeg4;
           CUVIDJPEGPICPARAMS jpeg;
           unsigned int CodecReserved[1024];
       } CodecSpecific;
    } CUVIDPICPARAMS;

    typedef struct _CUVIDH264PICPARAMS
    {
       // SPS
       int log2_max_frame_num_minus4;
       int pic_order_cnt_type;
       int log2_max_pic_order_cnt_lsb_minus4;
       int delta_pic_order_always_zero_flag;
       int frame_mbs_only_flag;
       int direct_8x8_inference_flag;
       int num_ref_frames;             // NOTE: shall meet level 4.1 restrictions
       unsigned char residual_colour_transform_flag;
       unsigned char bit_depth_luma_minus8;    // Must be 0 (only 8-bit supported)
       unsigned char bit_depth_chroma_minus8;  // Must be 0 (only 8-bit supported)
       unsigned char qpprime_y_zero_transform_bypass_flag;
       // PPS
       int entropy_coding_mode_flag;
       int pic_order_present_flag;
       int num_ref_idx_l0_active_minus1;
       int num_ref_idx_l1_active_minus1;
       int weighted_pred_flag;
       int weighted_bipred_idc;
       int pic_init_qp_minus26;
       int deblocking_filter_control_present_flag;
       int redundant_pic_cnt_present_flag;
       int transform_8x8_mode_flag;
       int MbaffFrameFlag;
       int constrained_intra_pred_flag;
       int chroma_qp_index_offset;
       int second_chroma_qp_index_offset;
       int ref_pic_flag;
       int frame_num;
       int CurrFieldOrderCnt[2];
       // DPB
       CUVIDH264DPBENTRY dpb[16];          // List of reference frames within the DPB
       // Quantization Matrices (raster-order)
       unsigned char WeightScale4x4[6][16];
       unsigned char WeightScale8x8[2][64];
       // FMO/ASO
       unsigned char fmo_aso_enable;
       unsigned char num_slice_groups_minus1;
       unsigned char slice_group_map_type;
       signed char pic_init_qs_minus26;
       unsigned int slice_group_change_rate_minus1;
       union
       {
           unsigned long long slice_group_map_addr;
           const unsigned char *pMb2SliceGroupMap;
       } fmo;
       unsigned int  Reserved[12];
       // SVC/MVC
       union
       {
           CUVIDH264MVCEXT mvcext;
           CUVIDH264SVCEXT svcext;
       };
    } CUVIDH264PICPARAMS;

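    A note on the approach, for what it is worth: in the NVCUVID API that ships with the CUDA SDK, CUVIDPICPARAMS is normally not filled by hand. The SDK provides a bitstream parser (cuvidCreateVideoParser) that consumes the demuxed packets and invokes a callback with a fully populated CUVIDPICPARAMS, which can be passed straight to cuvidDecodePicture. The sketch below is untested and abbreviated (sequence/display callbacks, error handling, and the decoder creation from the question's step 5 are elided); the function names feed_packet and on_decode_picture are illustrative, not part of the SDK:

    ```c
    #include <string.h>
    #include <nvcuvid.h>                 /* NVCUVID parser/decoder API */
    #include <libavformat/avformat.h>    /* AVPacket from av_read_frame() */

    static CUvideodecoder g_decoder;     /* created earlier via cuvidCreateDecoder() */

    /* The parser calls this with CUVIDPICPARAMS already populated
     * (SPS/PPS fields, DPB, slice offsets) -- no manual filling needed. */
    static int CUDAAPI on_decode_picture(void *user, CUVIDPICPARAMS *pic)
    {
        return cuvidDecodePicture(g_decoder, pic) == CUDA_SUCCESS;
    }

    CUvideoparser create_parser(void)
    {
        CUVIDPARSERPARAMS pp;
        memset(&pp, 0, sizeof(pp));
        pp.CodecType              = cudaVideoCodec_H264;
        pp.ulMaxNumDecodeSurfaces = 20;
        pp.pfnDecodePicture       = on_decode_picture;

        CUvideoparser parser = NULL;
        cuvidCreateVideoParser(&parser, &pp);
        return parser;
    }

    /* Feed each packet from av_read_frame() to the parser; it fires
     * on_decode_picture() whenever a complete picture is available. */
    void feed_packet(CUvideoparser parser, const AVPacket *pkt)
    {
        CUVIDSOURCEDATAPACKET cu_pkt;
        memset(&cu_pkt, 0, sizeof(cu_pkt));
        cu_pkt.payload      = pkt->data;
        cu_pkt.payload_size = pkt->size;
        cuvidParseVideoData(parser, &cu_pkt);
    }
    ```

    One caveat: for H.264 read from MP4/MOV containers, packets are usually in AVCC (length-prefixed) form and typically need conversion to Annex B (e.g. ffmpeg's h264_mp4toannexb bitstream filter) before being fed to the parser.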

  • VLC syntax to transcode & stream to stdout?

    4 October 2016, by Will Tower

    Goal: I am trying to use VLC as a local server to expand the video capabilities of an app created with Adobe AIR, Flex and ActionScript. I am using VLC to stream to stdout and reading that output from within my app.

    VLC Streaming capabilities
    VLC Flash Video

    Status: I am able to launch VLC as a background process and control it through its remote control interface (more detail). I can load, transcode and stream a local video file. The example app below is a barebones testbed demonstrating this.

    Issue: I am getting data into my app but it is not rendering as video. I don't know if it is a problem with my VLC commands or with writing to/reading from stdout. This technique of reading from stdout in AIR works (with ffmpeg, for example).

    One of the various transcoding commands I have tried:

    -I rc  // remote control interface  
    -vvv   // verbose debugging
    --sout  // transcode, stream to stdout
    "#transcode{vcodec=FLV1}:std{access=file,mux=ffmpeg{mux=flv},dst=-}"

    This results in data coming into my app, but for some reason it is not rendering as video when using appendBytes with the NetStream instance.

    If instead I write the data to an .flv file, a valid file is created, so the broken part seems to be writing it to stdout. One thing I have noticed: I am not getting metadata through the stdout method. If I play the file created with the command below, I do see metadata.

    Hoping someone sees where I am going wrong here.


    // writing to a file
    var output:File = File.desktopDirectory.resolvePath("stream.flv");
    var outputPath:String = output.nativePath;
    "#transcode{vcodec=FLV1}:std{access=file,mux=ffmpeg{mux=flv},dst=" + outputPath + "}");

    Note: In order to get this to work in AIR, you need to define the app profile as "extendedDesktop".


     <?xml version="1.0" encoding="utf-8"?>