
Other articles (48)
-
Websites made with MediaSPIP
2 May 2011
This page lists some websites based on MediaSPIP.
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
To do so, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You just need to sign up to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
-
Authorizations overridden by plugins
27 April 2010
MediaSPIP core
autoriser_auteur_modifier(), so that visitors are able to edit their own information on the authors page
On other sites (5695)
-
How to convert a 7-second video to a GIF on Android
21 August 2014, by Donnie Ibiyemi
I'm making an app that converts short video clips to GIFs. I was wondering if there is a library that directly converts videos to GIFs on the fly on Android.
I've tried to extract all the frames of the video and encode them into a GIF, but I'm only getting the first frame of the video. My code is below:
public static final int GIF_DELAY_BETWEEN_FRAMES = 100;
public static final float SPEED_RATIO = 1.1f;

// Fields such as mOutputFile, rev and RecorderActivity come from the asker's Activity.
Uri videoFileUri = Uri.parse(mOutputFile.toString());
FFmpegMediaMetadataRetriever retriever = new FFmpegMediaMetadataRetriever();
retriever.setDataSource(mOutputFile.getAbsolutePath());
rev = new ArrayList<Bitmap>();
MediaPlayer mp = MediaPlayer.create(RecorderActivity.this, videoFileUri);
int millis = mp.getDuration();

// The body of this loop and the output path were garbled in the source; the
// frame-grabbing loop below is a reconstruction from context, and the step size
// and "/sdcard/test.gif" are assumed values.
for (int i = 1000000; i < millis * 1000; i += 100000) {
    // Times are in microseconds; grab the frame closest to each timestamp.
    Bitmap bitmap = retriever.getFrameAtTime(i, FFmpegMediaMetadataRetriever.OPTION_CLOSEST);
    rev.add(bitmap);
}

try {
    FileOutputStream outStream = new FileOutputStream("/sdcard/test.gif");
    outStream.write(generateGIF());
    outStream.close();
} catch (Exception e) {
    e.printStackTrace();
}

public byte[] generateGIF() {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    AnimatedGifEncoder encoder = new AnimatedGifEncoder();
    encoder.setDelay((int) (GIF_DELAY_BETWEEN_FRAMES * (1 / SPEED_RATIO)));
    encoder.start(bos);
    for (Bitmap bitmap : rev) {
        encoder.addFrame(bitmap);
    }
    encoder.finish();
    return bos.toByteArray();
}
Please help me, guys... thanks.
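
For reference, ffmpeg itself can turn a short clip into an animated GIF in a single pass; an Android app that bundles an ffmpeg binary or wrapper library could run a command along these lines. This is only a sketch, and the frame rate, output width and file names are illustrative assumptions:

ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos" -loop 0 output.gif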
-
FFMPEG with moviepy
5 November 2023, by Shenhav Mor
I'm working on something that concatenates videos and adds some titles on top of them using moviepy.


As I've seen on the web and on my own PC, moviepy works on the CPU and takes a lot of time to save (render) a movie. Is there a way to improve the speed by running moviepy's writing step on the GPU, for example by using FFmpeg or something like that?


I didn't find an answer to that on the web, so I hope that some of you can help me.
I tried using threads=4 and threads=16, but it is still very slow and didn't change much.

My CPU is quite strong (an i7-10700K), but rendering a compilation with a total length of 8 minutes 40 seconds in moviepy still takes a very long time.


Any ideas? Thanks!
The code doesn't really matter, but here it is:


def Edit_Clips(self):

    clips = []

    time = 0.0
    for i, filename in enumerate(os.listdir(self.path)):
        if filename.endswith(".mp4"):
            tempVideo = VideoFileClip(self.path + "\\" + filename)

            txt = TextClip(txt=self.arrNames[i], font='Amiri-regular',
                           color='white', fontsize=70)
            txt_col = txt.on_color(size=(tempVideo.w + txt.w, txt.h - 10),
                                   color=(0, 0, 0), pos=(6, 'center'), col_opacity=0.6)

            w, h = moviesize = tempVideo.size
            txt_mov = txt_col.set_pos(lambda t: (max(w / 30, int(w - 0.5 * w * t)),
                                                 max(5 * h / 6, int(100 * t))))

            sub = txt_mov.subclip(time, time + 4)
            time = time + tempVideo.duration

            final = CompositeVideoClip([tempVideo, sub])

            clips.append(final)

    video = concatenate_videoclips(clips, method='compose')
    print("after")
    video.write_videofile(self.targetPath + "\\" + 'test.mp4', threads=16, audio_fps=44100, codec='libx264')
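
One thing that may help, assuming an NVIDIA GPU and an ffmpeg build with NVENC support, is to hand the encoding itself to a hardware encoder instead of libx264; write_videofile accepts any codec name the underlying ffmpeg knows. A minimal sketch (the codec choice is an assumption to adapt to your hardware):

    video.write_videofile(self.targetPath + "\\" + 'test.mp4',
                          threads=16,
                          audio_fps=44100,
                          codec='h264_nvenc')  # assumes an NVENC-capable GPU and ffmpeg build

Note that moviepy still composites every frame in Python on the CPU before piping it to ffmpeg, so a hardware encoder only speeds up the encoding step; the TextClip/CompositeVideoClip work itself remains CPU-bound.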



-
Cut, concatenate, and re-encode to h265 with ffmpeg
30 March 2022, by Make42
I have two h264 videos that I would like to cut (each of them), concatenate, and re-encode into h265, all with ffmpeg. How can I achieve that, considering that the following two approaches do not work?


First approach


I tried


ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
 -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
 -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
 -map "[outv]" -map "[outa]" \
 -c:v libx265 -vtag hvc1 -c:a copy \
 final.mp4



but I get the error message




Streamcopy requested for output stream 0:1, which is fed from a complex filtergraph. Filtering and streamcopy cannot be used together.
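
The error comes from the "-c:a copy" part: the audio is routed through the concat filtergraph, and a filtered stream cannot be stream-copied, so the audio has to be re-encoded as well. A minimal variation of the same command (the AAC codec and bitrate are assumptions; any audio encoder works):

ffmpeg -ss 00:00:05.500 -to 00:12:06.200 -i video1.mp4 \
 -ss 00:00:10.700 -to 01:43:47.000 -i video2.mp4 \
 -filter_complex "[0:v][0:a][1:v][1:a] concat=n=2:v=1:a=1 [outv] [outa]" \
 -map "[outv]" -map "[outa]" \
 -c:v libx265 -vtag hvc1 -c:a aac -b:a 192k \
 final.mp4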




Second approach


Alternatively, I created the file
cutpoints.txt
with the content

file video1.mp4
inpoint 5.5
outpoint 726.2
file video2.mp4
inpoint 600.7
outpoint 6227.0



and ran the command


ffmpeg -f concat -safe 0 -i cutpoints.txt -c:v libx265 -vtag hvc1 -c:a copy final.mp4



but then the video does not start exactly at 5.5 seconds, which is not surprising since




inpoint timestamp


This directive works best with intra frame codecs, because for non-intra frame ones you will usually get extra packets before the actual In point and the decoded content will most likely contain frames before In point too.