
Media (1)
-
The Great Big Beautiful Tomorrow
28 October 2011, by
Updated: October 2011
Language: English
Type: Text
Other articles (65)
-
Personalizing by adding your logo, banner or background image
5 September 2013, by
Some themes take into account three customization elements: adding a logo; adding a banner; adding a background image.
-
Publishing on MediaSPIP
13 June 2013
Can I post content from an iPad tablet?
Yes, if your installed MediaSPIP is at version 0.2 or higher. If needed, contact the administrator of your MediaSPIP to find out.
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources in standalone form.
For a working installation, you need to manually install all of the software dependencies on the server.
If you want to use this archive for a farm-mode installation, you will also need to make other modifications (...)
On other sites (10009)
-
Adding frames to gifs using FFMPEG
4 August 2021, by John Smith
I have the following code:


for /f %%i in ('dir /b /a-d %script_id%_tmp_img_*1.png') do (
    ffmpeg -y -v error -i %%i -i %script_id%_tmp_blank_frame.png -filter_complex "overlay" overlayed_%%i
    del %%i
)



It currently adds an overlay image frame at every 10th frame, e.g. 1, 11, 21, 31, etc. (anything that ends in *1). Any advice on how I would change this so that instead it adds 8 overlay frames before the gif and then 1 after the gif? Any help would be great, I am super stuck, thanks :)


EDIT: Here's the full code


:: Usage: composegif.bat <file> [-cut <seconds>] [-fps <fps>] [-blank <png file>]
:: The extra frame is a png named "blank_frame_orig.png", or a custom name can be passed as a parameter;
:: any dimension is ok, it will be resized and overlayed onto the original frame.
:: The frame is added as the first and every 10th after that, so 1, 11, 21...
:: If you want to edit that, search "FREQUENCY" in this file and edit the line below.

@echo off

setlocal ENABLEDELAYEDEXPANSION

set script_id=%random%
set max_time=15

if "%1" == "" (
 echo Select a file to transform
 exit 
)

if not exist %1 (
 echo File not found
 exit
)

::set params

set input_file=%1
set framerate=20
set blank_frame=blank_frame_orig.png
set cut_sec=%max_time%

:loop
if not "%2"=="" (
 if "%2"=="-fps" (
 set framerate=%3
 shift
 )
 if "%2"=="-cut" (
 if not "%3"=="" (
 if "%3" GTR "%max_time%" ( 
 echo Max output is %max_time% seconds
 ) else ( 
 set cut_sec=%3
 )
 ) else (
 set cut_sec=%max_time%
 )
 shift
 )
 if "%2"=="-blank" (
 set blank_frame=%3
 shift
 )
 shift
 goto :loop
)

echo Fps set to %framerate%
echo Cutting gif at %cut_sec% seconds

::extract images
echo Extracting images
ffmpeg -v error -i %input_file% -vsync 0 %script_id%_tmp_img_%%03d.png || del %script_id%_tmp_*

::get size from first frame
for /f %%i in ('ffprobe.exe -v error -show_entries stream^="width,height" -of csv^=p^=0:s^=\: %script_id%_tmp_img_001.png') do (
    set size=%%i
)

::resize blank frame
echo Size is %size%
ffmpeg -v error -y -i %blank_frame% -vf scale=%size% %script_id%_tmp_blank_frame.png || del %script_id%_tmp_*

::add overlay to frames and remove the corresponding originals
echo Adding overlay to every 10th frame
:: EDIT THIS TO CHANGE FREQUENCY
for /f %%i in ('dir /b /a-d %script_id%_tmp_img_*1.png') do (
    ffmpeg -y -v error -i %%i -i %script_id%_tmp_blank_frame.png -filter_complex "overlay" overlayed_%%i
    del %%i
)

::rename overlayed frames
rename "overlayed_*" "//////////*"

::create gif at 'framerate' fps
set finalFileName=%overlayed%_%random%.gif
echo Creating gif %finalFileName%
ffmpeg -v error -framerate %framerate% -i %script_id%_tmp_img_%%003d.png -t 00:00:%cut_sec% %finalFileName% || del %script_id%_tmp_*


::delete tmp files
del %script_id%_tmp_*

echo Done


But this code adds the overlay.png at every 10th frame of the gif, e.g. 1, 11, 21, 31, 41, etc. Instead I want it to add 19 of them before the gif and then 1 after.
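
For reference, one possible direction rather than a fix to the script above: since the frames are already extracted and the blank frame is already resized to the right dimensions, the gif could be assembled with ffmpeg's concat filter, feeding a looped copy of the blank frame as a leading and a trailing input around the numbered image sequence. This is only a sketch using the script's existing variables (%script_id%, %framerate%, %cut_sec%, %finalFileName%); the -t durations on the first and last input are assumptions that give 8 leading and 1 trailing frame at the default 20 fps, and the looped input could just as well be one of the overlayed_* frames if that is what should appear before and after.

:: Hypothetical replacement for the "create gif" step: blank frames in front,
:: the extracted frames in the middle, one blank frame at the end.
:: 0.4 s at 20 fps = 8 frames, 0.05 s at 20 fps = 1 frame; adjust -t to taste.
ffmpeg -v error -y ^
    -framerate %framerate% -loop 1 -t 0.4 -i %script_id%_tmp_blank_frame.png ^
    -framerate %framerate% -i %script_id%_tmp_img_%%03d.png ^
    -framerate %framerate% -loop 1 -t 0.05 -i %script_id%_tmp_blank_frame.png ^
    -filter_complex "[0:v][1:v][2:v]concat=n=3:v=1:a=0[v]" ^
    -map "[v]" -t 00:00:%cut_sec% %finalFileName%

The concat filter expects all inputs to share the same size and pixel format, which should already be the case here because the blank frame is resized from the same pipeline; this would also replace the per-frame overlay loop rather than sit alongside it.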


-
ffmpeg audio output in iOS
19 September 2015, by user3249421
Good day,
I have my own project which uses iFrameExtractor (https://github.com/lajos/iFrameExtractor). I modified the initWithVideo method to:
-(id)initWithVideo:(NSString *)moviePath imgView:(UIImageView *)imgView {
    if (!(self=[super init])) return nil;

    AVCodec *pCodec;
    AVCodec *aCodec;

    // Register all formats and codecs
    avcodec_register_all();
    av_register_all();

    imageView = imgView;

    // Open video file
    if(avformat_open_input(&pFormatCtx, [moviePath cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL) != 0) {
        av_log(NULL, AV_LOG_ERROR, "Couldn't open file\n");
        goto initError;
    }

    // Retrieve stream information
    if(avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Couldn't find stream information\n");
        goto initError;
    }

    // Find the first video stream
    if((videoStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &pCodec, 0)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot find a video stream in the input file\n");
        goto initError;
    }
    if((audioStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_AUDIO, -1, -1, &aCodec, 0)) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot find an audio stream in the input file\n");
        goto initError;
    }

    // Get a pointer to the codec context for the video stream
    pCodecCtx = pFormatCtx->streams[videoStream]->codec;
    aCodecCtx = pFormatCtx->streams[audioStream]->codec;

    // Find the decoder for the video stream
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if(pCodec == NULL) {
        av_log(NULL, AV_LOG_ERROR, "Unsupported video codec!\n");
        goto initError;
    }
    aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if(aCodec == NULL) {
        av_log(NULL, AV_LOG_ERROR, "Unsupported audio codec!\n");
        goto initError;
    }

    // Open codecs
    if(avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open video decoder\n");
        goto initError;
    }
    if(avcodec_open2(aCodecCtx, aCodec, NULL) < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot open audio decoder\n");
        goto initError;
    }

    // Allocate video frame
    pFrame = av_frame_alloc();

    outputWidth = pCodecCtx->width;
    self.outputHeight = pCodecCtx->height;
    lastFrameTime = -1;

    [self seekTime:0.0];

    return self;

initError:
    //[self release];
    return nil;
}

Video rendering works fine, but I don't know how to play the audio through the device output.
Thanks for any tips.
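
For the audio side, here is a minimal sketch of what the decoding step might look like with the same deprecated FFmpeg 2.x-era API the question's code already uses (avcodec_decode_audio4, av_free_packet). pFormatCtx, aCodecCtx and audioStream are the question's own variables, the packets would in practice come from the same av_read_frame() loop that already feeds the video decoder in iFrameExtractor, and actually hearing the sound would still require handing the decoded PCM to Core Audio (an AudioQueue or AudioUnit render callback), which is not shown here.

AVPacket packet;
AVFrame *aFrame = av_frame_alloc();

while (av_read_frame(pFormatCtx, &packet) >= 0) {
    if (packet.stream_index == audioStream) {
        int gotFrame = 0;
        // Deprecated pre-3.x call, matching the rest of the question's code.
        int len = avcodec_decode_audio4(aCodecCtx, aFrame, &gotFrame, &packet);
        if (len >= 0 && gotFrame) {
            // aFrame->data[] now holds aFrame->nb_samples samples in
            // aCodecCtx->sample_fmt; resample with libswresample if needed
            // and enqueue the PCM into an AudioQueue / AudioUnit buffer.
        }
    }
    av_free_packet(&packet); // av_packet_unref() in newer FFmpeg versions
}
av_frame_free(&aFrame);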
-
Displaying 450 image files from SDCard at 30fps on android
11 December 2013, by nikhilkerala
I am trying to develop an app that takes 15 seconds of video, allows the user to apply different filters, shows a preview of the effect, then allows saving the processed video to the SD card. I use ffmpeg to split the video into JPEG frames, apply the desired filter to all the frames using GPUImage, then use ffmpeg to encode the frames back into a video. Everything works fine except the part where the user selects a filter. When the user selects a filter, the app is supposed to display a preview of the video with the filter applied. Though the 450 frames get the filter applied fairly quickly, displaying the images sequentially at 30 fps (to make the user feel the video is being played) performs poorly. I tried different approaches, but the maximum frame rate I could attain even on the fastest devices is 10 to 12 fps.
The AnimationDrawable technique doesn't work in this case because it requires all of the images to be buffered into memory, which in this case is huge. The app crashes.
The code below is the best performing one so far (10 to 12 fps).
package com.example.animseqvideo;

import ......

public class MainActivity extends Activity {
    Handler handler;
    Runnable runnable;
    final int interval = 33; // 30.30 FPS
    ImageView myImage;
    int i = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        myImage = (ImageView) findViewById(R.id.imageView1);

        handler = new Handler();
        runnable = new Runnable() {
            public void run() {
                i++;
                if (i > 450) i = 1;
                File imgFile = new File(Environment.getExternalStorageDirectory().getPath()
                        + "/com.example.animseqvideo/image" + String.format("%03d", i) + ".jpg");
                if (imgFile.exists()) {
                    Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath());
                    myImage.setImageBitmap(myBitmap);
                }
                // SOLUTION EDIT - MOVE THE BELOW LINE OF CODE AS THE FIRST LINE OF run() AND FPS=30 !!!
                handler.postDelayed(runnable, interval);
            }
        };

        handler.postAtTime(runnable, System.currentTimeMillis() + interval);
        handler.postDelayed(runnable, interval);
    }
}

I understand that the process of getting an image from the SD card, decoding it, and then displaying it on the screen involves the SD card's read performance and the device's CPU and graphics performance. But I am wondering if there is a way I could save a few milliseconds in each iteration. Any suggestion would be of great help at this point.
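
As a closing note, here is a minimal sketch of the reordering that the SOLUTION EDIT comment in the code above refers to, reusing the question's own fields (handler, runnable, interval, i, myImage): posting the next tick before doing the decode means the SD-card read and JPEG decode no longer add their latency on top of the 33 ms interval, which is what lifts the loop back toward 30 fps.

runnable = new Runnable() {
    @Override
    public void run() {
        // Schedule the next frame first, so decode latency happens
        // "inside" the 33 ms interval rather than in addition to it.
        handler.postDelayed(this, interval);

        i++;
        if (i > 450) i = 1;
        File imgFile = new File(Environment.getExternalStorageDirectory().getPath()
                + "/com.example.animseqvideo/image" + String.format("%03d", i) + ".jpg");
        if (imgFile.exists()) {
            Bitmap myBitmap = BitmapFactory.decodeFile(imgFile.getAbsolutePath());
            myImage.setImageBitmap(myBitmap);
        }
    }
};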