
Media (3)
-
Elephants Dream - Cover of the soundtrack
17 October 2011
Updated: October 2011
Language: English
Type: Image
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Publier une image simplement
13 April 2011
Updated: February 2012
Language: French
Type: Video
Other articles (98)
-
MediaSPIP 0.1 Beta version
25 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
-
Multilang: improving the interface for multilingual blocks
18 February 2011
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it has been activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. It is therefore not necessary to go through a configuration step for this.
-
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
For older browsers, the Flowplayer Flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...)
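As a hedged illustration only (generic ffmpeg commands with hypothetical file names and settings, not MediaSPIP's actual encoding pipeline), a source file is typically transcoded into both an MP4 and a WebM rendition so that HTML5-capable browsers and the Flash fallback each find a format they can play:

# H.264/AAC MP4, playable by most HTML5 browsers and by the Flash fallback
ffmpeg -i source.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k video.mp4
# VP8/Vorbis WebM for browsers that prefer open codecs
ffmpeg -i source.mov -c:v libvpx -crf 10 -b:v 1M -c:a libvorbis video.webm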
On other sites (7428)
-
Flutter: FFmpeg Not Executing Filter
5 November 2022, by Dennis Ashford
I am downloading a video from Firebase and then trying to apply a watermark to that video, which will then be saved in a temporary directory in the cache. I am using the ffmpeg_kit_flutter package to do this. There is very little online about how this should work in Flutter.

The video and image are loaded and stored in the cache properly. However, the FFmpegKit.execute call is not working and does not create a new output file in the cache. Any suggestions on how to make that function execute? Are the FFmpeg commands correct?

Future<String> waterMarkVideo(String videoPath, String watermarkPath) async {
 //these calls are to load the video into temporary directory
 final response = await http.get(Uri.parse(videoPath));
 final originalVideo = File ('${(await getTemporaryDirectory()).path}/video.mp4');
 await originalVideo.create(recursive: true);
 await originalVideo.writeAsBytes(response.bodyBytes);
 print('video path' + originalVideo.path);

 //this grabs the watermark image from assets and decodes it
 final byteData = await rootBundle.load(watermarkPath);
 final watermark = File('${(await getTemporaryDirectory()).path}/image.png');
 await watermark.create(recursive: true);
 await watermark.writeAsBytes(byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
 print('watermark path' + watermark.path);

 //this creates temporary directory for new watermarked video
 var tempDir = await getTemporaryDirectory();
 final newVideoPath = '${tempDir.path}/${DateTime.now().microsecondsSinceEpoch}result.mp4';

 //and now attempting to work with ffmpeg package to overlay watermark on video
 await FFmpegKit.execute("-i $originalVideo -i $watermark -filter_complex 'overlay[out]' -map '[out]' $newVideoPath")
 .then((rc) => print('FFmpeg process exited with rc $rc'));
 print('new video path' + newVideoPath);

 return newVideoPath;
}


The logs show the file paths and also give the following output:


FFmpegKitFlutterPlugin 0x600000000fe0 started listening to events on 0x600001a4b280.
flutter: Loaded ffmpeg-kit-flutter-ios-https-x86_64-4.5.1.
flutter: FFmpeg process exited with rc Instance of 'FFmpegSession'
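For reference, a standalone ffmpeg invocation equivalent to the filtergraph this code is trying to build would look roughly like the sketch below (video.mp4, watermark.png and result.mp4 are placeholder names); note that the command string needs the actual file paths (e.g. originalVideo.path) rather than the File objects themselves:

# overlay watermark.png on top of video.mp4, keeping the audio stream if there is one
ffmpeg -i video.mp4 -i watermark.png -filter_complex "[0:v][1:v]overlay[out]" -map "[out]" -map 0:a? -c:a copy result.mp4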



-
Xuggler can't open IContainer of icecast server [Webm live video stream]
9 June 2016, by Roy Bean
I'm trying to stream a live WebM stream.
I tested several servers, and Icecast is my pick.
With ffmpeg capturing from an IP camera and publishing to the Icecast server, I'm able to see the video in HTML5 using this command:
ffmpeg.exe -rtsp_transport tcp -i "rtsp://192.168.230.121/profile?token=media_profile1&SessionTimeout=60" -f webm -r 20 -c:v libvpx -b:v 3M -s 300x200 -acodec none -content_type video/webm -crf 63 -g 0 icecast://source:hackme@192.168.0.146:8001/test
I'm using Java and tried to do the same with Xuggler, but I'm getting an error when opening the stream:
final String urlOut = "icecast://source:hackme@192.168.0.146:8001/agora.webm";
final IContainer outContainer = IContainer.make();
final IContainerFormat outContainerFormat = IContainerFormat.make();
outContainerFormat.setOutputFormat("webm", urlOut, "video/webm");
int rc = outContainer.open(urlOut, IContainer.Type.WRITE, outContainerFormat);
if (rc >= 0) {
} else {
    Logger.getLogger(WebmPublisher.class.getName()).log(Level.INFO, "Fail to open Container " + IError.make(rc));
}

Any help? I'm getting the error -2:

Error: could not open file (../../../../../../../csrc/com/xuggle/xuggler/Container.cpp:544)

It is also very important to set the content type to video/webm, because Icecast by default sets the MIME type to audio/mpeg.
-
ffmpeg create thumbnail image of multiple images from video
21 January 2018, by Michael Yousef
So I want to create a thumbnail image that consists of multiple images from a single video. I'm looking to make them 4x8, and I want to spread the images uniformly throughout the video.
Ideally the final product should show 32 image captures, all downscaled to a reasonable size so they can fit on screen at the same time. I'd also like the final product to have some overhead text, such as the video title, and I'd like to put the timestamp in the images as well.
This is an example of what I want to do: the text at the top and the timestamp in the images. This one is 6x4 and I want 4x8, but other than that it looks about like what I want.
I think ffmpeg probably has something to do this, but I can't seem to figure it out. I can generate individual screens but not collapse them into one. It's also slower than programs I've used to do this in the past; I'm not sure how they can do it as fast as they do. If I can't get ffmpeg to do this, I'm open to using Python to accomplish it.
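As a sketch of the kind of single command that can produce such a contact sheet (input.mp4, the frame step of 300 and the font settings are assumptions to adapt to the actual video), ffmpeg's select, scale, drawtext and tile filters can be chained: every Nth frame is kept, stamped with its source timestamp, scaled down, and packed into a 4x8 grid (4 columns by 8 rows):

# keep every 300th frame, stamp it with its timestamp, shrink it, and tile 32 of them into one image
# (on builds without fontconfig, add fontfile=/path/to/font.ttf to the drawtext options)
ffmpeg -i input.mp4 -vf "select=not(mod(n\,300)),scale=320:-2,drawtext=text='%{pts\:hms}':x=8:y=8:fontsize=20:fontcolor=white:box=1:boxcolor=black@0.5,tile=4x8" -frames:v 1 -vsync vfr contact_sheet.png

The step of 300 has to be chosen from the video's total frame count so that at least 32 frames survive the select filter; a title banner like the one in the example could then be added on top of the sheet with the pad and drawtext filters in a second pass.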