
Other articles (65)

  • Customise by adding your logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Publishing on MediaSPIP

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your MediaSPIP installation is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources in standalone form.
    To obtain a working installation, all of the software dependencies must be installed manually on the server.
    If you wish to use this archive for a farm-mode installation, further changes will also be required (...)

On other sites (10009)

  • Adding frames to gifs using FFMPEG

    4 August 2021, by John Smith

    I have the following code

    


    for /f %%i in ('dir /b /a-d %script_id%_tmp_img_*1.png') do (
        ffmpeg -y -v error -i %%i -i %script_id%_tmp_blank_frame.png -filter_complex "overlay" overlayed_%%i
        del %%i
    )


    


    It currently adds an overlay image frame at every 10th frame, e.g. 1, 11, 21, 31, etc. (anything that ends in *1). Any advice on how I would change this so that instead it adds 8 overlay frames before the gif and then 1 after the gif? Any help would be great, I am super stuck, thanks :)

    


    EDIT: Here's the full code

    


    :: Usage composegif.bat <file> [-cut ] [-fps ] [-blank <png file>]
    :: The extra frame is a png named "blank_frame_orig.png" or a custom name can be passed as a parameter,
    :: any dimension is ok, it will be resized and overlayed to the original frame
    :: The frame is added as first and every 10 after that, so 1, 11, 21...
    :: If you want to edit that, search "FREQUENCY" in this file and edit the line below

    @echo off

    setlocal ENABLEDELAYEDEXPANSION

    set script_id=%random%
    set max_time=15

    if "%1" == "" (
        echo Select a file to transform
        exit
    )

    if not exist %1 (
        echo File not found
        exit
    )

    ::set params

    set input_file=%1
    set framerate=20
    set blank_frame=blank_frame_orig.png
    set cut_sec=%max_time%

    :loop
    if not "%2"=="" (
        if "%2"=="-fps" (
            set framerate=%3
            shift
        )
        if "%2"=="-cut" (
            if not "%3"=="" (
                if "%3" GTR "%max_time%" (
                    echo Max output is %max_time% seconds
                ) else (
                    set cut_sec=%3
                )
            ) else (
                set cut_sec=%max_time%
            )
            shift
        )
        if "%2"=="-blank" (
            set blank_frame=%3
            shift
        )
        shift
        goto :loop
    )

    echo Fps set to %framerate%
    echo Cutting gif at %cut_sec% seconds

    ::extract images
    echo Extracting images
    ffmpeg -v error -i %input_file% -vsync 0 %script_id%_tmp_img_%%03d.png || del %script_id%_tmp_*

    ::get size from first frame
    for /f %%i in ('ffprobe.exe -v error -show_entries stream^="width,height" -of csv^=p^=0:s^=\: %script_id%_tmp_img_001.png') do (
        set size=%%i
    )

    ::resize blank frame
    echo Size is %size%
    ffmpeg -v error -y -i %blank_frame% -vf scale=%size% %script_id%_tmp_blank_frame.png || del %script_id%_tmp_*

    ::add overlay to frames and removing corresponding ones
    echo Adding overlay to every 10th frame
    :: EDIT THIS TO CHANGE FREQUENCY
    for /f %%i in ('dir /b /a-d %script_id%_tmp_img_*1.png') do (
        ffmpeg -y -v error -i %%i -i %script_id%_tmp_blank_frame.png -filter_complex "overlay" overlayed_%%i
        del %%i
    )

    ::rename overlayed frames
    rename "overlayed_*" "//////////*"

    ::create gif at 'framerate' fps
    set finalFileName=%overlayed%_%random%.gif
    echo Creating gif %finalFileName%
    ffmpeg -v error -framerate %framerate% -i %script_id%_tmp_img_%%003d.png -t 00:00:%cut_sec% %finalFileName% || del %script_id%_tmp_*

    ::delete tmp files
    del %script_id%_tmp_*

    echo Done


    but this code adds the overlay.png at every 10th frame of the gif, e.g. 1, 11, 21, 31, 41, etc. Instead I want it to add 19 of them before the gif and then 1 after.

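    One way to get "N overlay frames before the gif, then 1 after" is to stop overlaying every 10th extracted frame and instead copy the (already resized) blank frame into the numbered image sequence before the first extracted frame and once after the last one, then let ffmpeg encode the renumbered sequence. The sketch below only illustrates that idea, in Java rather than batch; the file names (blank_frame.png, img_###.png, seq_###.png, out.gif), the count of 8 leading frames and the 20 fps rate are placeholders taken from the question and the script's defaults, not part of the original code.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class PrependBlankFrames {
        public static void main(String[] args) throws IOException, InterruptedException {
            int before = 8;                            // hypothetical: blank frames shown before the gif
            Path blank = Paths.get("blank_frame.png"); // hypothetical: already resized to the gif's dimensions
            Path dir = Paths.get(".");

            // Collect the extracted frames; the img_###.png naming is an assumption.
            List<Path> frames;
            try (Stream<Path> s = Files.list(dir)) {
                frames = s.filter(p -> p.getFileName().toString().matches("img_\\d{3}\\.png"))
                          .sorted()
                          .collect(Collectors.toList());
            }

            // Re-emit everything as seq_###.png: blanks first, then the original frames, then one blank.
            int n = 1;
            for (int i = 0; i < before; i++) {
                Files.copy(blank, dir.resolve(String.format("seq_%03d.png", n++)), StandardCopyOption.REPLACE_EXISTING);
            }
            for (Path f : frames) {
                Files.copy(f, dir.resolve(String.format("seq_%03d.png", n++)), StandardCopyOption.REPLACE_EXISTING);
            }
            Files.copy(blank, dir.resolve(String.format("seq_%03d.png", n++)), StandardCopyOption.REPLACE_EXISTING);

            // Encode the renumbered sequence into a gif at the script's default 20 fps.
            new ProcessBuilder("ffmpeg", "-y", "-framerate", "20", "-i", "seq_%03d.png", "out.gif")
                    .inheritIO()
                    .start()
                    .waitFor();
        }
    }

    The same renumbering could equally be done inside the batch script with copy and a counter; the Java form is only meant to keep the individual steps explicit.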

  • ffmpeg audio output in iOS

    19 September 2015, by user3249421

    Good day,

    I have my own project which uses iFrameExtractor (https://github.com/lajos/iFrameExtractor). I modified the initWithVideo method to:

    -(id)initWithVideo:(NSString *)moviePath imgView: (UIImageView *)imgView {
    if (!(self=[super init])) return nil;

    AVCodec         *pCodec;
    AVCodec         *aCodec;

    // Register all formats and codecs
    avcodec_register_all();
    av_register_all();

    imageView = imgView;

    // Open video file
    if(avformat_open_input(&pFormatCtx, [moviePath cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0) {
       av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
       goto initError;
    }

    // Retrieve stream information
    if(avformat_find_stream_info(pFormatCtx,NULL) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Couldn't find stream information\n");
       goto initError;
    }

    // Find the first video stream
    if ((videoStream =  av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &pCodec, 0)) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
       goto initError;
    }

    if((audioStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, &aCodec, 0)) < 0 ){
       av_log(NULL, AV_LOG_ERROR, "Cannot find a audio stream in the input file\n");
       goto initError;
    }

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    aCodecCtx = pFormatCtx->streams[audioStream]->codec;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec == NULL) {
       av_log(NULL, AV_LOG_ERROR, "Unsupported video codec!\n");
       goto initError;
    }

    aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if(aCodec == NULL) {
       av_log(NULL, AV_LOG_ERROR, "Unsupported audio codec!\n");
       goto initError;
    }

    // Open codec
    if(avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
       av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
       goto initError;
    }

    if(avcodec_open2(aCodecCtx, aCodec, NULL) < 0){
       av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
       goto initError;
    }

    // Allocate video frame
    pFrame = av_frame_alloc();

    outputWidth = pCodecCtx->width;
    self.outputHeight = pCodecCtx->height;

    lastFrameTime = -1;
    [self seekTime:0.0];

    return self;

    initError:
       //[self release];
       return nil;
    }

    Video rendering works fine, but I don't know how to play the audio through the device output.

    Thanks for any tips.

  • Displaying 450 image files from SDCard at 30fps on android

    11 December 2013, by nikhilkerala

    I am trying to develop an app that takes 15 seconds of video, lets the user apply different filters, shows a preview of the effect, and then allows the processed video to be saved to the SD card. I use ffmpeg to split the video into JPEG frames, apply the desired filter to all the frames using GPUImage, then use ffmpeg to encode the frames back into a video. Everything works fine except the part where the user selects the filter. When the user selects a filter, the app is supposed to display a preview of the video with the filter applied. Although the 450 frames get the filter applied fairly quickly, displaying the images sequentially at 30 fps (to make the user feel the video is being played) performs poorly. I have tried different approaches, but the maximum frame rate I could reach, even on the fastest devices, is 10 to 12 fps.

    The AnimationDrawable technique doesn't work in this case because it requires all of the images to be buffered into memory, which in this case is huge, and the app crashes.

    The code below is the best-performing one so far (10 to 12 fps).

    package com.example.animseqvideo;
    import ......

    public class MainActivity extends Activity {
       Handler handler;
       Runnable runnable;
       final int interval = 33; // 30.30 FPS
       ImageView myImage;
       int i=0;

       @Override
       protected void onCreate(Bundle savedInstanceState) {
           super.onCreate(savedInstanceState);
           setContentView(R.layout.activity_main);

           myImage = (ImageView) findViewById(R.id.imageView1);

           handler = new Handler();
           runnable = new Runnable(){
               public void run() {

                   i++;  if(i>450)i=1;

                   File imgFile = new  File(Environment.getExternalStorageDirectory().getPath() + "/com.example.animseqvideo/image"+ String.format("%03d", i)   +".jpg");
                   if(imgFile.exists()){
                       Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath());
                       myImage.setImageBitmap(myBitmap);
                   }
    //SOLUTION EDIT - MOVE THE BELOW LINE OF CODE AS THE FIRST LINE OF run() AND FPS=30 !!!

                   handler.postDelayed(runnable, interval);
               }
           };
           handler.postAtTime(runnable, System.currentTimeMillis()+interval);
           handler.postDelayed(runnable, interval);
       }
    }

    I understand that the process of getting an image from the SD card, decoding it and displaying it on the screen depends on the SD card's read performance, the CPU performance and the graphics performance of the device. But I am wondering if there is a way I could save a few milliseconds in each iteration. Any suggestion would be of great help at this point.
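
    The inline "SOLUTION EDIT" comment in the code above already points at the fix: scheduling the next tick before decoding the bitmap, so the decode time is no longer added on top of the 33 ms interval. A minimal sketch of the reordered run(), assuming the rest of MainActivity stays exactly as posted, could look like this:

    runnable = new Runnable() {
        public void run() {
            // Schedule the next tick first, so decoding no longer delays the 33 ms cadence.
            handler.postDelayed(runnable, interval);

            i++;
            if (i > 450) i = 1;

            File imgFile = new File(Environment.getExternalStorageDirectory().getPath()
                    + "/com.example.animseqvideo/image" + String.format("%03d", i) + ".jpg");
            if (imgFile.exists()) {
                Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath());
                myImage.setImageBitmap(myBitmap);
            }
        }
    };

    Beyond that reordering, decoding the next frame on a background thread and only posting the resulting Bitmap to the UI thread is a further, untested optimisation that goes past what the question's own edit describes.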