
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (111)
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things):
- implementation costs to be shared between several different projects/individuals
- rapid deployment of multiple unique sites
- creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
Configuring language support
15 November 2010, by
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, in the navigation menu, you can access a "Language management" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language; once one is, it becomes greyed out in the configuration and (...)
On other sites (9962)
-
pyInstaller: Pack binary executable inside project's executable to run
18 December 2023, by zur

TL;DR:

I would like to pack the ffmpeg executable inside my own executable. Currently I am getting:

FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
Skipping ./testFile202312061352.mp4 due to FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'



Details:

I am creating the executable file using the following command:


pyinstaller cli.py \
 --onefile \
 --add-binary /Users/<machineuser>/anaconda3/envs/my_env/bin/ffmpeg:bin


The code that uses ffmpeg is not authored by me, and I would like to keep that part unchanged.

When I run it from the command line while the conda environment is active, it runs successfully, as python (or perhaps anaconda) knows where the binaries are. I have a pretty empty cli.py. That seems to be the entry point, and I hope, if it is possible, I can set the bin directory's path there ...
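For reference, here is a minimal sketch of what I imagine that path setup at the top of cli.py could look like (assuming PyInstaller's one-file mode, which unpacks bundled files into a temporary directory exposed as sys._MEIPASS at runtime):

import os
import sys

# When frozen by PyInstaller in --onefile mode, bundled files are unpacked
# into a temporary directory exposed as sys._MEIPASS; otherwise fall back to
# the directory containing this script.
base_dir = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))

# --add-binary ...:bin placed ffmpeg under a "bin" subdirectory, so prepend
# that directory to PATH before the third-party code tries to spawn 'ffmpeg'.
os.environ["PATH"] = os.path.join(base_dir, "bin") + os.pathsep + os.environ.get("PATH", "")

Since the one-file bundle unpacks into a temporary folder that PyInstaller removes when the process exits, this should not leave anything behind, which matches the constraint below.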

I am able to run the application successfully like the following:


(my_env) machineUser folder % "dist/cli_mac_001202312051431" ./testFile202312061352.mp4



I would like to run it like the following:


(base) machineUser folder % "dist/cli_mac_001202312051431" ./testFile202312061352.mp4



I would like to keep the world outside my executable's tmp folder the same; I would not want to change something that will be "left behind" after the executable terminates.


Question:

Can someone please explain how to modify the pyinstaller command, or what to change in cli.py, to achieve this?

-
dts to m4a (aac) ffmpeg to qaac output issue
10 April 2014, by user8979

Looking to encode two DTS 5.1 audio sources (Sonic Landscape and The Digital Experience) to m4a (AAC) 5.1 with qaac 2.35. Input is piped to qaac using:

ffmpeg -report -loglevel verbose -i "input.file" -vn -f wav -codec:a pcm_f32le - | qaac --cvbr 160 --quality 2 --rate=keep --ignorelength --no-delay - -o "output.m4a"

- Sonic Landscape duration: 18.848 s, qaac output duration: 18.859 s (output .m4a duration mismatch); mediainfo reports the output as 2ch while mediatab and ffmpeg report it as 5.1ch (LFE)
- The Digital Experience duration: 32.875 s, qaac output duration: 32.875 s; mediainfo reports the output as 2ch while mediatab and ffmpeg report it as 5.1ch (LFE)

Questions:
- What caused the duration mismatch in the first one? How can it be fixed?
- Is the output 2ch or 5.1ch?
- If it is 2ch, what qaac option(s) keep the output channels the same as the input?
- If the output is 5.1ch, does qaac then always preserve channels unless explicitly told otherwise?
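One way to settle the 2ch-versus-5.1ch disagreement independently of mediainfo and mediatab would be to query the stream metadata with ffprobe; a minimal sketch (assuming ffprobe is on PATH, and using output.m4a as a stand-in for the actual file):

import json
import subprocess

# Ask ffprobe for the first audio stream's channel count and layout as JSON.
out = subprocess.check_output([
    "ffprobe", "-v", "error", "-select_streams", "a:0",
    "-show_entries", "stream=channels,channel_layout",
    "-of", "json", "output.m4a",
])
stream = json.loads(out)["streams"][0]
print(stream["channels"], stream.get("channel_layout"))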
-
Fast green screen video processing on android device
17 March 2015, by Si-N
I have written an app in iOS that takes two video sources, one with a moving character on a green screen and any other video. The program then uses the GPUImage framework to add a chroma key shader via OpenGL ES 2, merges each frame (so the bottom frame now shows through where the green pixels are), and outputs to a new video file. This happens very quickly, faster than real time.
I have now been tasked with porting the app to Android. I thought it would be fairly straightforward. After doing some research I think I am wrong. There is an Android port of GPUImage but it does not handle video at the moment. I have done some research and come up with a very basic idea.
I was wondering if you think this approach is feasible:
- Convert one video file to match the resolution and type of the other using ffmpeg or JavaCV wrappers.
- Read each video frame by frame using ffmpeg (as MediaMetadataRetriever is very slow) and convert to some RGB format, then use a shader to apply the chroma key effect so the two frames are merged.
- Use ffmpeg to output the result to a new file.
This sounds slow, but if it seems feasible I will try it out. I am not at all sure about making the two videos' resolution, bitrate, etc. match. One video will be fixed at 1280 × 720 and the other will come from the camera on the device, so it will be variable. Also, I think using ffmpeg means using the NDK, which is a whole world of pain I wanted to avoid.
I have a headache thinking about it. Any advice would be greatly appreciated.
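For what it's worth, the keying-and-merge step itself can be prototyped off-device with ffmpeg's chromakey and overlay filters, which may help judge feasibility before touching the NDK; a rough sketch with hypothetical file names, assuming an ffmpeg build that includes the chromakey filter:

import subprocess

# [1:v]chromakey=color:similarity:blend makes the green pixels of the
# foreground transparent, then overlay composites it onto the background.
# -an drops audio to keep the example minimal.
subprocess.run([
    "ffmpeg", "-i", "background.mp4", "-i", "greenscreen.mp4",
    "-filter_complex",
    "[1:v]chromakey=green:0.15:0.05[fg];[0:v][fg]overlay=shortest=1[out]",
    "-map", "[out]", "-an", "output.mp4",
], check=True)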