
Other articles (105)

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If in doubt, contact your MediaSPIP administrator to find out.

  • Encoding and processing into web-friendly formats

    13 April 2011

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded as MP4, OGV and WebM (for HTML5 playback), with MP4 also used for Flash playback.
    Audio files are encoded as MP3 and OGG (for HTML5 playback), with MP3 also used for Flash playback.
    Where possible, text is analyzed to extract the data needed for indexing by search engines, and is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (7727)

  • How to set comma-delimited values from stdout?

    28 October 2017, by gregm

    I have a batch file that processes the output of an ffprobe query. It retrieves several bits of data that I use to determine some ffmpeg directives. In particular, I’m converting h264 videos into h265 if the video frame height is 720 or greater. I also convert the audio stream to aac if it isn’t already, and if that stream is higher than 128 kbps I convert it down to 128.

    I can do all of that by calling ffprobe a number of times and use if statements to decide what my ffmpeg command will be.

    I’d like my batch file to be more efficient, so I was thinking that if I could take the output of one (maybe two) ffprobe queries and stick that output into a for /f "tokens=..." loop, then I could set each ffprobe data point to a variable and just check the variables to decide what the resulting ffmpeg command will be.

    Here’s what I have right now to simply check whether the video stream is hevc. If it isn’t, ffmpeg converts the video to hevc and converts the audio to aac.

    for %%a in ("*.*") do (
        ffprobe -v quiet -show_entries stream=index,codec_name,height -of csv "%%a" 2>&1 | findstr "hevc"
        if errorlevel 1 (
            ffmpeg.exe -hwaccel cuvid -i "%%a" -pix_fmt p010le -c:v hevc_nvenc -preset slow -rc vbr_hq -b:v 4M -maxrate:v 10M -c:a aac "%%~na.h265-convert.mp4"
        )
    )

    That ffprobe query output looks like this:

    stream,0,h264,480

    I was thinking I could tokenize that output with something like:

    for /f "tokens=1,2,3,4 delims= " %%a in ("______") do set codec=%%b&set fheight=%%d

    I don’t know what to put in the spot where I have the _______. I really don’t want to create a temp file unless that’s the only option though.

    1) Is this an efficient way to achieve what I’m trying to do?

    2) What do I use in the blank spot above (________) to feed the output of the ffprobe query into my for loop?
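For reference, for /f can read a command's output directly with the in('command') form, which avoids a temp file; special characters such as =, , and > inside the quoted command must be caret-escaped. A sketch of how the tokenizing described above might look (untested; assumes the four-field csv line shown above, and an outer loop variable other than %%a so the token letters don't collide):

```bat
rem Outer file loop uses %%f so the inner tokens %%a..%%d stay free
for %%f in ("*.mp4") do (
    for /f "tokens=1-4 delims=," %%a in ('ffprobe -v quiet -show_entries stream^=index^,codec_name^,height -of csv "%%f" 2^>^&1') do (
        rem output "stream,0,h264,480" -> %%c = codec name, %%d = frame height
        set "codec=%%c"
        set "fheight=%%d"
    )
)
```

Note delims=, rather than a space, since the csv output is comma-separated; and if codec/fheight are read back inside the same parenthesized block, setlocal enabledelayedexpansion and !codec!-style expansion would also be needed.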

  • How to save an image from the middle of a video?

    29 October 2017, by puppon -su

    I need to make a thumbnail for a video: seek to 25% of the video and save that frame as an image. Here is what I’m doing right now, but it only saves a black image.

    #include <stdio.h>

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    void SaveFrame(AVFrame *pFrame, int width, int height);

    int main (int argc, char **argv)
    {

       av_register_all();

       AVFormatContext *pFormatCtx = avformat_alloc_context();

       int res;

       res = avformat_open_input(&pFormatCtx, "test.mp4", NULL, NULL);
       if (res) {
           return res;
       }


       avformat_find_stream_info(pFormatCtx, NULL);

       int64_t duration = pFormatCtx->duration;


       // Find the first video stream
       int videoStream=-1;
       for(int i=0; i<pFormatCtx->nb_streams; i++) {
           if(pFormatCtx->streams[i]->codec->codec_type==AVMEDIA_TYPE_VIDEO) {
               videoStream=i;
               break;
           }
       }
       if(videoStream==-1) {
           return -1;
       }

       AVCodecContext *pCodecCtxOrig = NULL;

       // Get a pointer to the codec context for the video stream
       pCodecCtxOrig=pFormatCtx->streams[videoStream]->codec;


       AVCodec *pCodec = NULL;
       // Find the decoder for the video stream
       pCodec=avcodec_find_decoder(pCodecCtxOrig->codec_id);
       if(pCodec==NULL) {
           fprintf(stderr, "Unsupported codec!\n");
           return -1; // Codec not found
       }


       AVCodecContext *pCodecCtx = NULL;
       // Copy context
       pCodecCtx = avcodec_alloc_context3(pCodec);
       if(avcodec_copy_context(pCodecCtx, pCodecCtxOrig) != 0) {
           fprintf(stderr, "Couldn't copy codec context");
           return -1; // Error copying codec context
       }


       // Open codec
       if(avcodec_open2(pCodecCtx, pCodec, NULL)<0) {
           return -1; // Could not open codec
       }


       AVFrame *pFrame = NULL;

       pFrame=av_frame_alloc();

       AVFrame *pFrameRGB = NULL;

       pFrameRGB=av_frame_alloc();



       // Determine required buffer size and allocate buffer
       int numBytes=avpicture_get_size(AV_PIX_FMT_RGB24, pCodecCtx->width,
                                   pCodecCtx->height);

       uint8_t *buffer = NULL;
       buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));


       // Assign appropriate parts of buffer to image planes in pFrameRGB
       // Note that pFrameRGB is an AVFrame, but AVFrame is a superset
       // of AVPicture
       res = avpicture_fill((AVPicture *)pFrameRGB, buffer, AV_PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);
       if (res<0) {
           return res;
       }



       // I've set this number randomly
       res = av_seek_frame(pFormatCtx, videoStream, 20.0, AVSEEK_FLAG_FRAME);
       if (res<0) {
           return res;
       }



       AVPacket packet;
       while(1) {
           av_read_frame(pFormatCtx, &packet);
           if(packet.stream_index==videoStream) {
               int frameFinished;
               avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
               if(frameFinished) {
                   SaveFrame(pFrameRGB, pCodecCtx->width,
                       pCodecCtx->height);
                   break;
               }

           }
       }


       avformat_close_input(&pFormatCtx);
       return 0;
    }



    void SaveFrame(AVFrame *pFrame, int width, int height) {
     FILE *pFile;
     char szFilename[] = "frame.ppm";
     int  y;

     // Open file
     pFile=fopen(szFilename, "wb");
     if(pFile==NULL)
       return;

     // Write header
     fprintf(pFile, "P6\n%d %d\n255\n", width, height);

     // Write pixel data
     for(y=0; y<height; y++)
         fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

     // Close file
     fclose(pFile);
    }

    I was following this tutorial: http://dranger.com/ffmpeg/tutorial01.html and http://dranger.com/ffmpeg/tutorial07.html. It says it was updated in 2015, but there are already some warnings about deprecated code, for example here: pFormatCtx->streams[i]->codec.

    I got the video duration (in microseconds), but I don’t understand what I should pass to av_seek_frame. Can I somehow use frame numbers for both the duration and seeking, instead of time?

  • How to detect a blue screen in ffmpeg video packets?

    28 November 2017, by 심상원

    Good morning. I have a question about FFmpeg.

    I’m using FFmpeg to study C++ on Linux.

    The camera stream is RTSP and the format is H.264.

    I would like to determine whether the camera image is a blue screen, but the following points are confusing me.

    1. A keyframe arrives every second, or on some X-second cycle. Does the camera still deliver keyframes even if the image hasn’t changed?

    2. When keyframes are delivered, are the packets transmitted between them zero-sized?

    3. If the behaviour above is the same as for a normal image, should I compare the individual frames after decoding?

    If none of these approaches is right, please let me know if you know of a good way.

    Thank you.