
Other articles (76)
-
Automatic backup of SPIP channels
1 April 2010 — When running an open platform, it is important for the hosts to have reasonably regular backups in order to guard against any potential problem.
This task relies on two SPIP plugins: Saveauto, which makes a regular backup of the database as a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the (...)
-
Automatic installation script for MediaSPIP
25 April 2011 — To work around installation difficulties, caused mainly by server-side software dependencies, an all-in-one bash installation script was created to ease this step on a server running a compatible Linux distribution.
You need SSH access to your server and a "root" account in order to use it, so that the dependencies can be installed. Contact your hosting provider if you do not have these.
The documentation on using the installation script (...)
-
User profiles
12 April 2011 — Each user has a profile page for editing their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)
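The first excerpt above (automatic channel backup) boils down to a database dump plus an archive of the site's files. A rough sketch of what such a scheduled job does under the hood — the paths, database name and credentials are illustrative, not what Saveauto and mes_fichiers_2 actually use:

```shell
#!/bin/sh
# Hypothetical nightly backup job: a MySQL dump (readable by phpMyAdmin)
# plus a zip archive of the site's important data. The Saveauto and
# mes_fichiers_2 plugins automate roughly this from inside SPIP.
STAMP=$(date +%Y%m%d)                  # e.g. 20100401
BACKUP_DIR=/var/backups/spip           # illustrative location
mkdir -p "$BACKUP_DIR"
mysqldump -u spip -p"$DB_PASS" spip_db > "$BACKUP_DIR/spip-$STAMP.sql"
zip -rq "$BACKUP_DIR/spip-data-$STAMP.zip" /var/www/spip/IMG /var/www/spip/config
```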
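Likewise, the all-in-one installation script from the second excerpt starts by verifying root access before installing the server-side dependencies. An illustrative outline for a Debian-like system — the package list is hypothetical, not MediaSPIP's actual one:

```shell
#!/bin/sh
# Illustrative outline of an all-in-one installer; the real MediaSPIP
# script does much more, and its dependency list differs.
if [ "$(id -u)" -ne 0 ]; then
    echo "Run as root; contact your host if you lack SSH/root access." >&2
    exit 1
fi
apt-get update
apt-get install -y apache2 php5 mysql-server ffmpeg   # hypothetical package list
```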
On other sites (6923)
-
How to optimize FFmpeg video editing?
6 March 2016, by user6964 — I use the commands below to edit a video, but the whole process takes a long time, even though the output keeps the same quality as the original video.
//First cut the two segments out of the original video
exec("ffmpeg -i $video_path_main -ss $first_time1 -t $first_time2 -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 1 -strict -2 $name_first");
exec("ffmpeg -i $video_path_main -ss $second_time1 -t $second_time2 -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 1 -strict -2 $name_second");
$name_edit_second = uniqid() . '.mp4'; //Then watermark the second segment
exec("ffmpeg -i $name_second -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 1 -strict -2 -vf movie='" . $image_name . " [watermark]; [in] [watermark] overlay=308:43" . "' $name_edit_second");
//Then merge the MP4 files with MEncoder
$name_total_1 = uniqid() . '.mp4';
exec("mencoder -oac pcm -ovc xvid -vf scale -xvidencopts bitrate=460 -o $name_total_1 " . $name_first . ' ' . $name_edit_second);
//Then convert the merged video to the 3 formats my player needs
$name_total = uniqid();
//MP4 to FLV
exec("ffmpeg -i $name_total_1 -f flv -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 1 $name_total.flv");
//MEncoder MP4 to FFmpeg MP4
exec("ffmpeg -i $name_total_1 -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 1 -strict -2 $name_total.mp4");
//MP4 to WebM
exec("ffmpeg -i $name_total_1 -acodec libvorbis -s 476x268 -r 30 -b:v 2000k -g 100 -ar 22050 -ab 48000 -ac 2 -f webm $name_total.webm");
I don't know whether some of these parameters, or one of these commands in particular, accounts for most of the processing time.
Note: some videos are assembled from more than 2 parts of the original video.
UPDATE
Maybe the parameter
-threads 1
would help keep CPU usage down. I also need to optimize the re-encoding, because with only 8 users it already uses 100% of the resources. I run FFmpeg on a separate server, which returns the edited video to the server hosting my application.
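Untested, but two things in the pipeline above stand out: every clip is fully re-encoded several times (cut, watermark, merge, then three delivery formats), and the merge goes through MEncoder with yet another encode. A sketch of a shorter pipeline that trims and watermarks in one pass per segment, then merges losslessly with FFmpeg's concat demuxer — file names stand in for the PHP variables above:

```shell
#!/bin/sh
# Trim segment 1, and trim + watermark segment 2, one encode each.
ffmpeg -ss "$CUT1_START" -t "$CUT1_LEN" -i input.mp4 \
       -s 476x268 -r 30 -b:v 2000k -ar 22050 -ac 1 part1.mp4
ffmpeg -ss "$CUT2_START" -t "$CUT2_LEN" -i input.mp4 -i watermark.png \
       -filter_complex "overlay=308:43" -r 30 -b:v 2000k -ar 22050 -ac 1 part2.mp4
# Merge without re-encoding: both parts share the same codec and size,
# so stream copy through the concat demuxer is nearly free.
printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > parts.txt
ffmpeg -f concat -i parts.txt -c copy merged.mp4
# Only now encode the three delivery formats, once each, from merged.mp4.
```

This drops the MEncoder step and one whole generation of re-encoding; the three final conversions could also run in parallel if the CPU budget allows.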
-
Writing a live multimedia application using OpenGL & co., saving output to disc [closed]
21 January 2013, by user1997286 — I want to write an application that does the following:
- receives commands via Art-Net (DMX over Ethernet, a control protocol) for each object (called a layer)
- each layer can be one of the following: live camera stream, movie, image
- each layer can be translated, rotated or stretched
- on each layer I can set filters (like a kaleidoscope effect, blur, colour correction, etc.)
- the resulting video stream is placed in 3D space
- I want to display each part of the image on one projector (up to 3 in total) using a TripleHead2Go (3 projectors each display a different region of my DVI output); each projector image should have its own soft-edge and keystone parameters
- the resulting image will also be shown on a preview screen with an information overlay.
I think all of that should be possible with OpenGL and OpenAL (for the movie audio).
I plan to use C++, OpenGL for graphics, OpenAL for audio, FFmpeg for video conversion if needed, and Ubuntu/Debian as the OS.
The software will be used for multimedia shows at concerts, including cameras etc.
All of this should happen live (on a Full HD output), with an i7 3770, GTX 670 and 16 GB of RAM, for at least 8 layers (4 live images at once, plus some overlays such as the actor's name and some logos).
But now comes the question.
Is it also possible to do the following with this setup:
- write the output image, with all the 3D transformations, to a movie file (to master a DVD later), with audio
- mix audio from different inputs and files (ambience mics, the signal from the sound mixer, playbacks from my own application) into more than one mix (e.g. one mix for the recording, one mix for live)
- stream that output, complete or in parts (e.g. the left part of the image), over the network (for example, projector 1 is near the server, so I connect it via DVI; projectors 2 and 3 are connected to a computer that receives the streams for those two projectors, with soft edge on each stream; and screen 4 is outside the concert hall and shows the complete live stream)
- which GUI framework should I use for this?
- would Java even be performant enough for this?
- is it possible to use this mechanism for offline rendering too (e.g. I have stored the cut points on disc and saved every single camera stream, so I can fix mistakes later or cut out some parts)?
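For the first point (saving the composited output to a movie file with audio), one common pattern is to read the rendered frames back from the GPU (glReadPixels, ideally via pixel buffer objects) and pipe the raw pixels into a separate encoder process, so the render loop never blocks on encoding. The encoder end of that pipe might look like the sketch below — the `renderer` binary, the audio file name, and the frame geometry are assumptions, not part of the original question:

```shell
#!/bin/sh
# Hypothetical: "renderer" writes raw RGB24 1920x1080 frames to stdout at 25 fps.
# ffmpeg reads them from stdin, muxes in a pre-mixed audio track, and encodes.
./renderer | ffmpeg -f rawvideo -pix_fmt rgb24 -s 1920x1080 -r 25 -i - \
                    -i live_mix.wav -c:v libx264 -preset fast -shortest output.mp4
```

Keeping the encoder in its own process also means a codec crash cannot take down the live show.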
-
ffmpeg settings or alternatives to ffmpeg on raspberry pi for video streaming
14 October 2016, by andrei — I have a Raspberry Pi (model B) running Raspbian Wheezy on a 16 GB SD card. I also have 32 GB of flash storage attached via USB. I'm trying to stream a video (an H.264-encoded 1280x720 mp4 file) from that flash storage over Ethernet.
I'm using ffmpeg + ffserver. Here is ffserver.conf (relevant parts):
...
MaxBandwidth 10000
<feed>
...
FileMaxSize 100M
ACL allow 127.0.0.1
</feed>
...
<stream>
Feed feed1.ffm
Format flv
VideoSize 288x176   # made small just for testing
NoAudio
</stream>
...
I start ffserver, then call ffmpeg with this command:
ffmpeg -re -an -i /mnt/u32/main.mp4 -r 25 -b:v 300k http://localhost:8090/feed1.ffm
And I'm getting 3-5 fps at most. Naturally, when I try to view the stream on another computer it's very choppy and virtually unusable.
Am I missing some settings? Or is there another streaming solution that leverages the GPU instead of just the CPU, as ffmpeg does? I'm even open to suggestions about other boards (e.g. a PandaBoard, or clustering several RPis). I'm also flexible about the output format.
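One angle worth trying, since the source file is already H.264: skip re-encoding on the Pi entirely and only remux, which costs almost no CPU. This bypasses ffserver (whose feed/stream settings force a re-encode to the configured size) and streams directly to the receiver; the destination address below is a placeholder, and this is an untested sketch rather than a verified fix:

```shell
#!/bin/sh
# Untested sketch: stream-copy the existing H.264 video into an MPEG-TS
# stream over UDP. No encoding happens on the Pi, only remuxing.
SRC=/mnt/u32/main.mp4
DEST=udp://192.168.1.50:1234    # hypothetical receiver address
ffmpeg -re -i "$SRC" -an -c:v copy -f mpegts "$DEST"
```

On the receiving machine, something like `ffplay udp://@:1234` would then decode the full 1280x720 stream, with the decoding cost moved off the Pi.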