
Other articles (54)
-
General document management
13 May 2011
MediaSPIP never modifies the original document that is uploaded.
For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while keeping the original available for download in case the original document cannot be read in a web browser; and it extracts the original document’s metadata to describe the file textually.
The tables below explain what MediaSPIP can do (...) -
MediaSPIP v0.2
21 June 2013
MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, and it is announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
MediaSPIP version 0.1 Beta
16 April 2011
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the MediaSPIP sources, in standalone form.
For a working installation, all software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...)
On other sites (3851)
-
AE per frame rendering FFMPEG forming to video
9 February 2018, by Deckard Cain
I’m trying to set up an automated per-frame rendering system using After Effects and FFMPEG. The idea here is that my slave nodes (running AE) will generate the frames and save them immediately to a Samba share (this way I can team multiple slaves together to tackle the same project file, and we aren’t writing an 8GB AVI file, then compressing and deleting it, when we could just render 300MB of frames and assemble them).
The database and Samba share are running on FreeBSD. This machine will then take those frames and use FFMPEG to combine them into an MP4 video.
The issue that I’m running into, is that when I render out the After Effects project file directly to an AVI file (one slave, no individual frame rendering), the video length is 1:31. When I render out the exact same project file into individual frames, then run it through FFMPEG to combine and compress them, the output is 1:49.
I have tried a billion different things to make the length of the video the same, but can’t seem to make it so :/
aerender.exe -mp -project %PROJECTFILE% -comp %COMPOSITION% -output [########].jpg
^Keep in mind, there may be 99999999 frames, or as few as one, that need to be rendered (if we need to re-render a specific section because of an asset change)
ffmpeg -nostdin -i %FRAMELOCATION% -c:v libx264 -preset veryfast -an -y outputVideo.mp4
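The 1:31 vs. 1:49 gap is consistent with a frame-rate mismatch: ffmpeg’s image-sequence input defaults to 25 fps, so frames rendered from a roughly 29.97 fps comp play back about 20% longer. A minimal sketch of the combine step with the input rate stated explicitly, assuming a comp rate of 29.97 fps and a hypothetical frame pattern in place of %FRAMELOCATION%:
ffmpeg -nostdin -framerate 29.97 -i frames/frame_%08d.jpg -c:v libx264 -preset veryfast -an -y outputVideo.mp4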
-
Get frames from a video into a matrix
19 June 2019, by Nephilim
I’m currently trying to implement a compression algorithm (frame prediction) for an assignment. I am not looking for thumbnail files, or even just a shell command to generate something for me. My problem is specifically integrating it with a golang program.
I just started and I’m already stuck. I’m supposed to get each frame out of a video, divide them into I, P, and B frames, perform inter-coding (compress the frame itself), then perform intra-coding (between the frames).
Right now I cannot even get started on the above problems, because I have no idea how to read the video as something I could use in code. The only library I can think of is ffmpeg. FFMPEG can get separate frames, apparently even I, P, and B frames.
ffmpeg -i <inputfile> -vf '[in]select=eq(pict_type\,B)[out]' b.frames.mp4
But this is just another video output, which I do not know how to open.
What I was thinking of was outputting the frames as bitmaps (?), then reading each bitmap separately to reconstruct three 3D matrices: one of I frames, one of P frames, and one of B frames. However, this seems like quite a feat. Someone, somewhere has definitely tried to parse a video into a 3D matrix and found a better solution than what I’m thinking of.
To be concise: I have a video, and I need a 3D matrix. The 3D matrix is a matrix of 2D matrices, each of which represents a frame in the video. Each point in the 3D matrix is a pixel (or whatever the equivalent is in videos).
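One common way to get exactly that structure is to have ffmpeg decode the video to raw RGB on stdout and read fixed-size frames from the pipe in Go. The sketch below is only an illustration of that approach, not code from the post: the file name "input.mp4", the hard-coded 640x360 frame size, and the Pixel type are assumptions (in practice the real dimensions would be probed first, e.g. with ffprobe).

package main

import (
	"fmt"
	"io"
	"os/exec"
)

// Assumed dimensions; probe the real video before relying on these.
const (
	width  = 640
	height = 360
)

// Pixel is one RGB sample.
type Pixel struct{ R, G, B uint8 }

func main() {
	// Ask ffmpeg to decode every frame as packed RGB24 and write it to stdout.
	// "input.mp4" is a placeholder file name.
	cmd := exec.Command("ffmpeg",
		"-i", "input.mp4",
		"-f", "rawvideo",
		"-pix_fmt", "rgb24",
		"-")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	frameSize := width * height * 3 // bytes per decoded frame
	buf := make([]byte, frameSize)
	var video [][][]Pixel // the "3D matrix": frame -> row -> pixel

	for {
		// Read exactly one frame's worth of bytes from the pipe.
		if _, err := io.ReadFull(out, buf); err != nil {
			break // io.EOF (or ErrUnexpectedEOF) once the stream ends
		}
		frame := make([][]Pixel, height)
		for y := 0; y < height; y++ {
			row := make([]Pixel, width)
			for x := 0; x < width; x++ {
				i := (y*width + x) * 3
				row[x] = Pixel{R: buf[i], G: buf[i+1], B: buf[i+2]}
			}
			frame[y] = row
		}
		video = append(video, frame)
	}
	cmd.Wait()
	fmt.Printf("decoded %d frames of %dx%d\n", len(video), width, height)
}

The same pipe could be combined with the select filter from the command above to build separate matrices for I, P, and B frames instead of one matrix for the whole video.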
-
Intercept and divert graphic output before it hits the linux framebuffer, not using X11
13 October 2018, by Apollo
I have been researching this issue for about a week, and it may be that I don’t know the right questions to ask.
I am running a Debian distro (Raspbian Stretch on the Raspberry Pi). I am not using X11. Instead, I am booting straight to the command line, and will eventually probably boot headless, starting a program at startup or sshing in to interact.
What I need to do is to start an application (a game engine specifically) that draws graphics to the framebuffer. Then, I need to intercept that stream before it makes it into the framebuffer, so I can work with it in real time.
Ultimately, I am using ffmpeg to compress and stream video of the output to another Pi. So, I want to be able to start an application and stream its output over a LAN, while still being able to interact with the command line in a separate thread.
I have the ffmpeg command to pull from /dev/fb0, and have successfully started the graphics application and streamed the content. But is there any way to intercept, capture, or redirect that application’s output so it never actually hits the framebuffer? In my searching I have found many examples of writing to or reading from the framebuffer, but nothing about stopping content before it reaches the buffer.
I am happy with any solution that uses an existing package or application, or with C or Rust code that accomplishes what I need.
Thank you
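The capture command itself is not included in the post. As an illustrative sketch only, a framebuffer grab-and-stream invocation of the kind described could look like the line below; the frame rate, encoder settings, and destination address are assumptions rather than details from the post:
ffmpeg -f fbdev -framerate 30 -i /dev/fb0 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://192.168.0.20:1234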