Haskell - Turning multiple image-files into one video-file using the ffmpeg-light package
25 April 2021, by oRole

Background

I wrote an image-processing application which uses the ffmpeg-light
package to fetch all the frames of a given video file, so that the program can afterwards apply grayscaling as well as edge-detection algorithms to each of the frames.

Now I'm trying to put all of the frames back into a single video-file.


Used Libs

ffmpeg-light-0.12.0

JuicyPixels-3.2.8.3

...

What have I tried?

I have to be honest, I didn't really try anything because I'm kinda clueless about where and how to start. I saw that there is a package called Command
which allows running processes/commands using the command line. With that I could use ffmpeg (not ffmpeg-light
) to create a video out of image files, which I would first have to save to the hard drive, but that would be kinda hacky.
Within the documentation of ffmpeg-light on Hackage (ffmpeg-light docu) I found the frameWriter function, which sounds promising.

frameWriter :: EncodingParams -> FilePath -> IO (Maybe (AVPixelFormat, V2 CInt, Vector CUChar) -> IO ()) 



I guess FilePath
would be the location where the video file gets stored, but I can't really imagine how to apply the frames as EncodingParams
to this function.

Others

I can access:

- r, g, b, a as well as y, a pixel values
- image width / height / format
Question

Is there a way to achieve this using the ffmpeg-light
package?

As the
ffmpeg-light
package lacks documentation when it comes to conversion from images to video, I would really appreciate your help. (I do not expect a fully working solution.)

Code

The code that reads the frames:

-- Gets and returns all frames that a given video contains.
getAllFrames :: String -> IO [(Double, DynamicImage)]
getAllFrames vidPath = do
  result <- try (imageReaderTime $ File vidPath)
              :: IO (Either SomeException (IO (Maybe (Image PixelRGB8, Double)), IO ()))
  case result of
    Left _ -> do
      printStatus "Invalid video-path or invalid video-format detected." "Video"
      return []
    Right (getFrame, cleanup) -> addNextFrame getFrame [] <* cleanup

-- Accumulates all available frames of a video.
addNextFrame :: IO (Maybe (Image PixelRGB8, Double)) -> [(Double, DynamicImage)] -> IO [(Double, DynamicImage)]
addNextFrame getFrame frames = do
  frame <- getFrame
  case frame of
    Nothing -> do
      printStatus "No more frames found." "Video"
      return frames
    Just (img, time) -> do
      -- Reuse the frame we already fetched; calling getFrame again here
      -- would silently drop every other frame.
      let newFrameData = (time, ImageRGB8 img)
      printStatus ("Frame: " ++ show (length frames) ++ " added.") "Video"
      addNextFrame getFrame (frames ++ [newFrameData])



Where I am stuck / The code that should convert images to video:


-- Converts from several images to video
juicyToFFmpeg :: [Image PixelYA8] -> ?
juicyToFFmpeg imgs = undefined
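For what it's worth, here is a plausible sketch of an answer. It assumes (unverified against this exact package version) that ffmpeg-light's Codec.FFmpeg.Juicy module exports an imageWriter that accepts JuicyPixels images directly, and that defaultParams and initFFmpeg are exported from Codec.FFmpeg; the 640x480 dimensions are placeholder assumptions:

```haskell
import Codec.FFmpeg (initFFmpeg, defaultParams)
import Codec.FFmpeg.Juicy (imageWriter)
import Codec.Picture (Image, PixelRGB8)

-- Hypothetical sketch: encode a list of frames into a video file.
-- Assumes every frame has the dimensions given to defaultParams.
juicyToFFmpeg :: [Image PixelRGB8] -> FilePath -> IO ()
juicyToFFmpeg frames path = do
  initFFmpeg                          -- initialise libav once per process
  writeFrame <- imageWriter (defaultParams 640 480) path
  mapM_ (writeFrame . Just) frames    -- Just frame appends one frame
  writeFrame Nothing                  -- Nothing flushes and closes the file
```

The key idea is that the writer returned by imageWriter is a sink: feed it Just frame repeatedly, then Nothing exactly once to finalise the file. The frames collected by getAllFrames would need converting from DynamicImage back to Image PixelRGB8 first.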



IframeExtractor doesn't output sound with RTSP
9 January 2013, by Kamax

I use IframeExtractor from the mooncatventures git repository; it plays the .mov file nicely. But when I try to read an RTSP stream, I hear no sound. This is the FFmpeg dump from the RTSP stream:
Metadata:
title : unknown
comment : unknown
Duration: N/A, start: 49435.000589, bitrate: 258 kb/s
Program 3223
No Program
Stream #0:0: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 720x576 [SAR 64:45 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1(fra): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 142 kb/s
Stream #0:2(fra): Subtitle: dvb_teletext ([6][0][0][0] / 0x0006)
Stream #0:3(qad): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, mono, fltp, 47 kb/s
Stream #0:4(qaa): Audio: aac ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 68 kb/s

And this is the dump from the local .mov file that works:
Metadata:
major_brand : qt
minor_version : 0
compatible_brands: qt
creation_time : 2010-01-17 21:52:33
model : iPhone 3GS
model-eng : iPhone 3GS
date : 2010-01-17T16:52:33-0500
date-eng : 2010-01-17T16:52:33-0500
encoder : 3.1.2
encoder-eng : 3.1.2
make : Apple
make-eng : Apple
Duration: 00:00:03.25, start: 0.000000, bitrate: 3836 kb/s
Stream #0:0(und): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 640x480, 3695 kb/s, 30.02 fps, 30 tbr, 600 tbn, 1200 tbc
Metadata:
rotate : 90
creation_time : 2010-01-17 21:52:33
handler_name : Core Media Data Handler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 63 kb/s
Metadata:
creation_time : 2010-01-17 21:52:33
handler_name : Core Media Data Handler

The audio class that manages sounds contains a codec detector, which says that the codec CODEC_ID_AAC is found for both inputs:
audioStreamBasicDesc_.mFormatFlags = 0;
switch (_audioCodecContext->codec_id) {
    case CODEC_ID_MP3:
        audioStreamBasicDesc_.mFormatID = kAudioFormatMPEGLayer3;
        break;
    case CODEC_ID_AAC:
        audioStreamBasicDesc_.mFormatID = kAudioFormatMPEG4AAC;
        audioStreamBasicDesc_.mFormatFlags = kMPEG4Object_AAC_Main;
        NSLog(@"audio format aac %s (%d) is supported", _audioCodecContext->codec_name, _audioCodecContext->codec_id);
        break;
}

I see data going into the buffer but I hear nothing. Maybe audioStreamBasicDesc_ has wrong settings, but I can't find what.
Is it possible that it's not the same AAC codec?
Has someone experienced the same issue?
Any help is welcome; I've been stuck on this problem for some days now.

Edit:
I have found an error that I didn't have before, and I don't know how to resolve it. If I change audioStreamBasicDesc.mFramesPerPacket to 0 or divide it by 2, the error message disappears.

AudioConverterNew returned 'fmt?'
Prime failed ('fmt?'); will stop (72000/0 frames)
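A hedged hint, not part of the original post: 'fmt?' is Core Audio's kAudioFormatUnsupportedDataFormatError, and for AAC the packet size is fixed at 1024 sample frames, so mFramesPerPacket should be 1024 rather than guessed or divided. A plausible way to fill the description (the fields read from _audioCodecContext mirror the post's FFmpeg context and are assumptions about the surrounding code):

```c
/* Sketch: an AudioStreamBasicDescription for compressed AAC input.   */
/* AAC always packs 1024 sample frames per packet, and a compressed   */
/* format leaves the per-byte size fields at 0 (variable packet size). */
AudioStreamBasicDescription asbd = {0};
asbd.mFormatID         = kAudioFormatMPEG4AAC;
asbd.mSampleRate       = _audioCodecContext->sample_rate;  /* e.g. 48000 */
asbd.mChannelsPerFrame = _audioCodecContext->channels;     /* e.g. 2 */
asbd.mFramesPerPacket  = 1024;  /* fixed for AAC-LC */
asbd.mBytesPerPacket   = 0;     /* variable, so 0 */
asbd.mBytesPerFrame    = 0;
asbd.mBitsPerChannel   = 0;     /* compressed: 0 */
asbd.mFormatFlags      = 0;     /* set kMPEG4Object_AAC_Main only for Main profile */
```

Forcing kMPEG4Object_AAC_Main, as the post's switch statement does, is also suspect: the streams above are almost certainly AAC-LC, and a profile mismatch can make AudioConverterNew reject the format.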
SNES Hardware Compression
16 June 2011, by Multimedia Mike — Game Hacking

I was browsing the source code for some Super Nintendo Entertainment System (SNES) emulators recently. I learned some interesting things about compression hardware. I had previously uncovered one compression algorithm used in an SNES title, but that was implemented in software.
SNES game cartridges — being all hardware — were at liberty to expand the hardware capabilities of the base system by adding new processors. The most well-known of these processors was the Super FX, which allows for basic polygon graphical rendering, powering such games as Star Fox. It was by no means the only such add-on processor, though. Here is a Wikipedia page of all the enhancement chips used in assorted SNES games. A number of them mention compression and so I delved into the emulators to find the details:
- The Super FX is listed in Wikipedia vaguely as being able to decompress graphics. I see no reference to decompression in emulator source code.
- DSP-3 emulation source code makes reference to LZ-type compression as well as tree/symbol decoding. I’m not sure if the latter is a component of the former. Wikipedia lists the chip as supporting "Shannon-Fano bitstream decompression."
- Similar to Super FX, the SA-1 chip is listed in Wikipedia as having some compression capabilities. Again, either that’s not true or none of the games that use the chip (notably Super Mario RPG) make use of the feature.
- The S-DD1 chip uses arithmetic and Golomb encoding for compressing graphics. Wikipedia refers to this as the ABS Lossless Entropy Algorithm. Googling for further details on that algorithm name yields no results, but I suspect it’s unrelated to anti-lock brakes. The algorithm is alleged to allow Star Ocean to smash 13 MB of graphics into a 4 MB cartridge ROM (largest size of an SNES cartridge).
- The SPC7110 can decompress data using a combination of arithmetic coding and Z-curve/Morton curve reordering.
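As an aside, the Z-curve/Morton reordering mentioned for the SPC7110 is simple to illustrate: a Morton index interleaves the bits of the two coordinates, so walking indices in order visits pixels in nested quadrants. A minimal sketch (generic, not SPC7110-specific):

```haskell
import Data.Bits (shiftL, testBit, (.|.))

-- Interleave the low 8 bits of x and y into a Morton (Z-curve) index:
-- bit i of x lands at position 2*i, bit i of y at position 2*i+1.
morton :: Int -> Int -> Int
morton x y = foldr step 0 [0 .. 7]
  where
    step i acc = acc .|. place x i (2 * i) .|. place y i (2 * i + 1)
    place v i pos = if testBit v i then 1 `shiftL` pos else 0
```

For example, morton 3 1 interleaves x-bits {0,1} and y-bit {0} into positions {0,2} and {1}, giving 7; decompression hardware can use this ordering to keep spatially close pixels close in the bitstream.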
No, I don’t plan to implement codecs for these schemes. But it’s always comforting to know that I could.
Not directly a compression scheme, but still a curious item, is the MSU1 concept put forth by the bsnes emulator. This is a hypothetical coprocessor implemented by bsnes that gives an emulated cartridge access to a 4 GB address space. What to do with all this space? Allow for the playback of uncompressed PCM audio as well as uncompressed video at 240x144x256 colors @ 30 fps. According to the docs and the source code, the latter feature doesn't appear to be implemented, though; only the raw PCM playback.