
Media (3)
- MediaSPIP Simple: future default graphic theme?
  26 September 2013 / Updated: October 2013 / Language: French / Type: Video
- GetID3 - File information block
  9 April 2013 / Updated: May 2013 / Language: French / Type: Image
- GetID3 - Additional buttons
  9 April 2013 / Updated: April 2013 / Language: French / Type: Image
Other articles (74)
- MediaSPIP version 0.1 Beta
  16 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable". The zip file provided here contains only the MediaSPIP sources, in standalone form. To get a working installation, all of the software dependencies must be installed manually on the server. If you wish to use this archive for a farm-mode installation, further changes are also required (...)
- Improving the base version
  13 September 2013. Nicer multiple selection: the Chosen plugin improves the ergonomics of multiple-selection fields; see the two images that follow for a comparison. Simply enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
- Updating from version 0.1 to 0.2
  24 June 2013. An explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new at the software-dependency level: the latest versions of FFMpeg (>= v1.2.1) are used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed to retrieve metadata; ffmpeg2theora is no longer used; flvtool2 is dropped in favor of flvtool++; and ffmpeg-php, which is no longer maintained, is no longer installed (...)
On other sites (12280)
-
Adding ffmpeg OMX codec to Genymotion Android 4.4.2 emulator
22 April 2016, by photon
Basic question:
Is there a way to add a new audio codec to the Genymotion Android emulator, short of downloading the entire Android source, learning how to build it, and creating my own version of Android?
Context:
I have written a Java Android app that acts as an audio renderer, as well as being a DLNA/OpenHome server and client. Think "BubbleUpnp" without video. My primary development platform is Win8.1. The program started as an ActiveState "pure-perl" DLNA MediaServer on Windows, which I then ported to Ubuntu, and which I got working under Android a few years ago. It was pretty funky ... all UI presented through an HTTP server/jquery/jquery-ui, served from an Ubuntu shell running under Android (a trick in itself), serving up HTML pages to Chrome running on the same (Android) device. Besides being "funky", it had a major drawback: it required a valid IP address to work, as I could not figure out how to get Ubuntu to provide a loopback device for a 127.0.0.1 localhost. I use the app as a "car stereo" on my boat (which is my home), which is often not hooked up to the internet.
I had a hard time getting started in Android app development because the speed of the Android emulators in Eclipse was horrid, and the ADB drivers did not work from Win8 for the longest time.
Then one day, about a year ago, I ran into Genymotion (kudos to the authors), and all of a sudden I had a workable Android development environment. So I added a Java implementation of the DLNA server, which then grew into a renderer as well, using Android's MediaPlayer class, then gained the ability to act as a DLNA control point, and more recently also acquired OpenHome servers and renderers.
In a separate effort, I created a build environment for a program called fpCalc, based on ffmpeg, on a variety of platforms, including Win, Linux, and Android x86, arm, and arm7 devices (bitbucket.org/phorton1/), and ran an extensive series of tests to determine the validity and longevity of fpCalc fingerprints, discovering that the fingerprint changed depending on the version of ffmpeg it was built against. That is a separate topic, to be sure, but in the process I learned at least a bit about how to build ffmpeg, as well as about Android shared libraries, JNI interfaces, etc.
So now the Android-Java version of the program has advanced past the old perl version, and I am debating whether I want to keep trying to build the perl version (and/or add a wxPerl UI to it).
One issue that has arisen, for me, is that the Genymotion emulator does not support WMA decoding ... Android dropped support for WMA a while back due to licensing issues ... yet my music library has a significant number of tunes in WMA files, and I don't want to "convert" them. My carefully thought-out philosophy is that my program does not modify the contents, or tags, or anything else in the original media files that I have accumulated, or will receive in the future, treating them instead as "artifacts" worth preserving as-is. No conversion is going to make a file "better" than it was, and I wish to preserve ALL of the original sources for ALL of my music going forward.
So I'm thinking: gee, I can build FFMPEG on 7 different platforms, and I see all these references to "OMX FFMPEG Codec Support for Android" on the net, so "all I need to do is create the OMX component and somehow get it into Genymotion".
I have studied up on OMX and OpenMAX IL, seen Michael Chen's posts, and seen the Stack Overflow questions
How to make ffmpeg codec component as OMX component
and
Android: How to integrate a decoder to multimedia framework
and Cedric Fung's page https://vec.io/posts/use-android-hardware-decoder-with-omxcodec-in-ndk, and Michael Chen's repository at https://github.com/omxcodec, as well as virtually every other page on the net that mentions any combination of libstagefright, OMX, Genymotion, and FFMPEG.
(This page would not let me put more than 2 links, as I don't have a "10" reputation, or I would have listed some of the sources I have seen.)
My Linux development environment is an Ubuntu 12.04 vbox running on my Win machine. I have downloaded and run the Android-x86 iso as a vbox, and IT contains the ffmpeg codecs, but unfortunately it supports neither a wifi interface nor the vbox "guest additions", so it has a really funky mouse. I tried for about 3 days to address those two issues, but in the end I do not feel it is usable for my purposes, and I really like the way Genymotion "feels", particularly the mouse support. So I'd like to keep Genymotion as my "Windows Android" virtual device under which I run my program, and deprecate and stop using my old perl source ...
except Genymotion does not support WMA files ...
Several side notes:
(a) There is no good way to write a single-sourced application in Java that runs natively in Windows AND as an Android app.
(b) I don’t want to reboot my Windows machine to a "real" Android device just to play my music files. The machine has to stay in Windows as I use it for other things as well.
(c) I am writing this as my machine is in the 36th hour of downloading the entire AOSP source code base to a partition in my Ubuntu vbox, while I sit in a hotel room on a not-so-good internet connection in Panama City, Panama, before I return to my boat in remote Bocas del Toro, Panama, where the internet connection is even worse.
(d) I did get WMA decoding to work in my app by calling my FFMPEG executable from Java (converting the file to either WAV/PCM or AAC), but because of limitations in Android's MediaPlayer it does not work well, particularly for remotely hosted WMA files ... MediaPlayer insists on having the whole file present before it starts to play, which can take several seconds or longer, and I am hoping that by getting a "real" WMA codec underneath MediaPlayer, that problem will just disappear ...
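For reference, that conversion step is just a single ffmpeg invocation per file; something like the following (filenames and bitrate are illustrative, not the exact flags my app uses):

    # decode a WMA file to PCM/WAV
    ffmpeg -i song.wma -c:a pcm_s16le song.wav
    # or transcode it to AAC instead
    ffmpeg -i song.wma -c:a aac -b:a 192k song.m4a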
So, I'm trying to figure this whole mess out. There are a lot of tantalizing clues and suggestions, but what I have found, or at least what I am starting to believe, is that if I want to add a simple WMA audio decoding codec to Android (Genymotion), not only do I have to download basically the ENTIRE AOSP source tree and learn a new set of tools (repo, etc.), but I also have to (be able to) rebuild, from scratch, the entire Android system, especially libstagefright.so, in such a way as to be COMPLETELY compatible with the existing one in Genymotion, while at the same time adding ffmpeg codecs à la Michael Chen's page.
And I'm just asking: could it really be that difficult?
Anyways, this makes me crazy. Is there no way to just build a new component, or at worst a new OMX core, and add it to Genymotion WITHOUT building all of Android, preferably based only on the OMX .h files? Or do I REALLY have to replace the existing libstagefright.so, which means, basically, rebuilding all of Android ...
p.s. I thought it would be nice to get this figured out, build it, and then post the installable new FFMPEG codecs someplace for other people to use, so that they don’t also grow warts on their ears and have steam shooting out of their eyeballs, while they get old trying to figure it out ....
-
How to find the offset by which each video must be delayed to sync them perfectly?
19 January 2023, by PirateApp

Let me explain my use case a bit here


- There are 4 of us playing the same game.
- 3 of us record mkv using OBS Studio at 60 fps; the 4th records with some other tool at 30 fps.
- Each mission starts with a cutscene and ends with a cutscene.
- I would like to create a video like the image you see above, starting and ending at the same points, where the intermediate footage is whatever each player is doing in the game.
- Currently I follow a slightly complicated process to achieve this, and I was wondering if there is an easier way to do it.

My current process:

Take a screenshot of the cutscene from one of the videos (see the frame-grab sketch below).
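A single frame can be pulled with ffmpeg directly; for example (the timestamp and filenames are placeholders):

    # grab one frame at a known cutscene timestamp and save it as a PNG
    ffmpeg -ss 00:00:35 -i video1.mkv -frames:v 1 1.png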
Run a search for this screenshot inside the other videos using the command below:


ffmpeg \
    -i "video1.mkv" \
    -r 1 -loop 1 -i 1.png \
    -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" \
    -f null -



It gives me a result like this in each video:


[Parsed_blackframe_1 @ 0x600000c9c000] frame:263438 pblack:91 pts:4390633 t:4390.633000 type:P last_keyframe:263400
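Since blackframe writes its report to stderr, the matching line does not have to be picked out of the log by eye; the same command can be piped through ordinary shell tools, e.g.:

    ffmpeg -i "video1.mkv" -r 1 -loop 1 -i 1.png \
        -an -filter_complex "blend=difference:shortest=1,blackframe=90:32" \
        -f null - 2>&1 | grep blackframe | head -n 1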
Use the start time from each of the results to create a split-screen video using the command below:


ffmpeg \
    -i first.mkv \
    -i second.mkv \
    -i third.mkv \
    -i fourth.mp4 \
    -filter_complex "
        nullsrc=size=640x360 [base];
        [0:v] trim=start=35.567,setpts=PTS-STARTPTS, scale=320x180 [upperleft];
        [1:v] trim=start=21.567,setpts=PTS-STARTPTS, scale=320x180 [upperright];
        [2:v] trim=start=41.233,setpts=PTS-STARTPTS, scale=320x180 [lowerleft];
        [3:v] trim=start=142.933333,setpts=PTS-STARTPTS, scale=320x180 [lowerright];
        [0:a] atrim=start=35.567,asetpts=PTS-STARTPTS [outa];
        [base][upperleft] overlay=shortest=1 [tmp1];
        [tmp1][upperright] overlay=shortest=1:x=320 [tmp2];
        [tmp2][lowerleft] overlay=shortest=1:y=180 [tmp3];
        [tmp3][lowerright] overlay=shortest=1:x=320:y=180 [outv]
    " -map "[outv]" -map "[outa]" synced.mkv
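(Each overlay step pastes one 320x180 tile onto the 640x360 canvas, with x/y selecting the quadrant, and the trimmed audio of the first input is carried through [outa]; the output filename is a placeholder.)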



As you can see, it is a complex process, and it depends completely on what image I capture. Sometimes I find that things are still slightly off at the beginning or the end because the images don't match 100%. My guess is that the frame rate is different for each video, not to mention that 3 of them are mkv inputs and one is an mp4 input.


Is there a better way to get the offset by which each video should be shifted to sync them perfectly?

The only way that I can think of is to take 1 video, then:


- Take a starting timestamp and an ending timestamp, say with a total duration of 30s.
- Take the second video.
- Start from 0 to 30s, compare the frames in both videos, and set a score.
- Start from 0.001 to 30.001, compare the frames, and set a score.
- Start from 0.002 to 30.002, compare the frames, and set a score.
- Basically, increment the second video's offset by 0.001 s each time and find the offset with the highest score (a rough sketch of this search follows below).


Any better way of doing this? I need to run this on 100s if not 1000s of videos.
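One way to score a candidate offset without decoding frames by hand is ffmpeg's ssim filter, which prints an average similarity ("All:") between two inputs. Stepping the second video's start time and keeping the offset with the highest score implements the search above; a rough sketch, assuming GNU seq, with a coarser 0.1 s step to keep the number of runs sane (window start/length, step range, and filenames are all illustrative):

    # score a 30 s window of first.mkv against second.mkv at each trial offset
    for off in $(seq 0 0.1 5); do
        score=$(ffmpeg -ss 35.567 -t 30 -i first.mkv -ss "$off" -t 30 -i second.mkv \
            -filter_complex "[0:v]scale=320:180[a];[1:v]scale=320:180[b];[a][b]ssim" \
            -f null - 2>&1 | grep -o 'All:[0-9.]*' | tail -n 1)
        echo "$off $score"  # keep the offset with the highest All: value
    done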
-
SurfaceView for subtitles: alpha does not work
27 May 2018, by user654628
Goal: I am trying to build a video player with subtitles for Android. The video can be low resolution, but the subtitles should be at the resolution of the phone (so that if the video is 720p, the subtitles still render at the screen size, say 1080p).
Issue: I am using FFMPEG to render a frame at, say, 720p, but the phone's display is 1080p. I need to display subtitles at a resolution different from the video's, so pixel blending is difficult.
I first tried to scale the frame (AVFrame) with sws_scale, but each frame took 80 ms, so that is not an option (since it runs in software).
Then I tried two SurfaceViews, one for the video and one for the subtitles, where the video surface would be 720p and the subtitles SurfaceView 1080p; the video then scales up to the phone size. The issue here is that the subtitles are not translucent: black with opacity 0 is transparent, but white with alpha 0 is still white. Why is this?
// Code from Java: the view that extends FrameLayout
public VideoView(@NonNull Context context, @Nullable AttributeSet attrs, int defStyleAttr) {
    super(context, attrs, defStyleAttr);
    mVideoSurface = new SurfaceView(context);
    mSubtitlesSurface = new SurfaceView(context);
    addView(mVideoSurface);
    addView(mSubtitlesSurface);
    mVideoSurface.getHolder().addCallback(mSurfaceCallback);
    mSubtitlesSurface.getHolder().addCallback(mSurfaceCallback);
    // place the subtitle surface above the video surface and request an alpha channel
    mSubtitlesSurface.setZOrderMediaOverlay(true);
    mSubtitlesSurface.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    // etc
}

Eventually, as a test, I tried to render a square to the subtitle surface view (C++):
// Render the video frame, then render the subtitle frame
ANativeWindow_Buffer buffer;
ANativeWindow_setBuffersGeometry(subWindow, width, height, WINDOW_FORMAT_RGBA_8888);
if ((ret = ANativeWindow_lock(subWindow, &buffer, NULL)) < 0) {
    return ret;
}
// fill a 100x100 square in the middle of the buffer
for (int j = height / 2; j < height / 2 + 100; j++) {
    for (int i = width / 2; i < width / 2 + 100; i++) {
        uint8_t *d = (uint8_t *) buffer.bits + j * (buffer.stride * 4) + i * 4;
        d[0] = 0xFF; // r
        d[1] = 0xFF; // g
        d[2] = 0xFF; // b
        d[3] = 0;    // alpha
        // note: Android window surfaces are generally composited assuming
        // premultiplied alpha, so (r,g,b) should not exceed a;
        // (0xFF,0xFF,0xFF,0x00) is not a valid premultiplied color
    }
}
ANativeWindow_unlockAndPost(subWindow);
ANativeWindow_unlockAndPost(subWindow);So above code should render a white square in the image with 0 alpha (so should be invisible), but it is shown. If I change it to yellow with alpha 0 it will be visible but not the correct color. If I change to white with 1 alpha, it is white and opaque. If I use black with alpha 0xCC, it is invisible, only if alpha is 0xFF then it is visible as black. Seems to have no translucency even though I added it to the SurfaceHolder. Why is it like this ? I can add more code if needed.
Is my only option, to do what I want, to render the frame as a texture in OpenGL (GLSurfaceView), resize the image to the phone resolution, and blend the alpha subtitles onto the frame as a texture?
Thanks in advance.