
Media (10)
-
Demon Seed
26 September 2011
Updated: September 2011
Language: English
Type: Audio
-
Demon seed (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
The four of us are dying (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Corona radiata (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Lights in the sky (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (17)
-
Submit bugs and patches
13 April 2011 — Unfortunately, no software is ever perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information: the browser you are using, including its exact version; as precise an explanation of the problem as possible; if possible, the steps that lead to the problem; a link to the site / page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
Submit bugs and patches
10 April 2011 — Unfortunately, no software is ever perfect...
If you think you have found a bug, report it in our ticket system, taking care to pass along the relevant information: the type and exact version of the browser in which you see the anomaly; as precise an explanation of the problem as possible; if possible, the steps to reproduce the problem; a link to the site / page in question.
If you think you have solved the bug yourself (...) -
Creating farms of unique websites
13 April 2011 — MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (3046)
-
Using ImageMagick to efficiently stitch together a line scan image
2 October 2018, by rkantos — I'm looking for alternatives to line scan cameras for use in sports timing, or rather for the part where placing needs to be figured out. I found that common industrial cameras can readily match the speed of commercial camera solutions at >1000 frames per second. For my needs the absolute timing accuracy is usually not important; what matters is the relative placing of the athletes. I figured I could use one of the cheapest Basler, IDS or other area scan industrial cameras for this purpose. Of course there are line scan cameras that can do a lot more than a few thousand fps (or Hz), but it is possible to get area scan cameras that can do the required 1000-3000fps for less than 500€.
My holy grail would of course be the near-real time image composition capabilities of FinishLynx (or any other line scan system), basically this part: https://youtu.be/7CWZvFcwSEk?t=23s
The whole process I was thinking of for my alternative is:
- Use Basler Pylon Viewer (or other software) to record 2px wide images at the camera's fastest read speed. For the camera I am currently using, this means it has to be turned on its side and the height needs to be reduced, since that is the only way it will read 1920x2px frames @ >250fps.
- Make a program or batch script that stitches these 1920x2px frames together, so that, for example, one second of recording (1000 1920x2px frames) yields a resulting image with a resolution of 1920x2000px (Horizontal x Vertical).
- Finally, using the same program or another way, rotate the image so it reflects how the camera is positioned, thus achieving an image with a resolution of 2000x1920px (again Horizontal x Vertical).
- Open the image in an analysis program (currently ImageJ) to quickly analyze the results.
I am no programmer, but this is what I was able to put together just using batch scripts, with the help of Stack Overflow of course.
- Currently, recording a whole 10 seconds to disk as a raw/MJPEG (avi/mkv) stream can be done in real time.
- Recording individual frames as TIFF or BMP, or using FFmpeg to save them as PNG or JPG, takes 20-60 seconds. The appending and rotation then take a further 45-60 seconds.
This all needs to be achieved in less than 60 seconds for 10 seconds of footage (1000-3000fps @ 10s = 10000-30000 frames), which is why I need something faster.
I was able to figure out how to be pretty efficient with ImageMagick:
magick convert -limit file 16384 -limit memory 8GiB -interlace Plane -quality 85 -append +rotate 270 "%folder%\Basler*.Tiff" "%out%"
#%out% has a .jpg filename that is built dynamically from the folder name and the number of frames.
This command works and gets me 10000 frames encoded in about 30 seconds on an i5-2520M (most of the processing seems to use only one thread, though, since it sits at 25% CPU usage). This is the resulting image: https://i.imgur.com/OD4RqL7.jpg (19686x1928px)
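That 25% figure is consistent with a single busy thread on a two-core, four-thread i5; as far as I can tell, the decode-and-append pipeline is dominated by sequential reads and memory copies that ImageMagick does not spread across threads, so the realistic speedup is in skipping the intermediate frame files rather than in using more cores.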
However, since recording to TIFF frames using Basler's Pylon Viewer takes that much longer than recording an MJPEG video stream, I would like to use the MJPEG (avi/mkv) file as a source for the appending. I noticed FFmpeg has an "image2pipe" output format, which should be able to feed images directly to ImageMagick. I was not able to get this working though:
$ ffmpeg.exe -threads 4 -y -i "Basler acA1920-155uc (21644989)_20180930_043754312.avi" -f image2pipe - | convert - -interlace Plane -quality 85 -append +rotate 270 "%out%" >> log.txt
ffmpeg version 3.4 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth --enable-libmfx
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Invalid Parameter - -interlace
[mjpeg @ 000000000046b0a0] EOI missing, emulating
Input #0, avi, from 'Basler acA1920-155uc (21644989)_20180930_043754312.avi':
Duration: 00:00:50.02, start: 0.000000, bitrate: 1356 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj422p(pc, bt470bg/unknown/unknown), 1920x2, 1318 kb/s, 200 fps, 200 tbr, 200 tbn, 200 tbc
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> mjpeg (native))
Press [q] to stop, [?] for help
Output #0, image2pipe, to 'pipe:':
Metadata:
encoder : Lavf57.83.100
Stream #0:0: Video: mjpeg, yuvj422p(pc), 1920x2, q=2-31, 200 kb/s, 200 fps, 200 tbn, 200 tbc
Metadata:
encoder : Lavc57.107.100 mjpeg
Side data:
cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
av_interleaved_write_frame(): Invalid argument
Error writing trailer of pipe:: Invalid argument
frame= 1 fps=0.0 q=1.6 Lsize= 0kB time=00:00:00.01 bitrate= 358.4kbits/s speed=0.625x
video:0kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Conversion failed!
If I go a bit higher for the height, I no longer get the "[mjpeg @ 000000000046b0a0] EOI missing, emulating" error. However, the whole thing will only work with <2px high/wide footage.
Edit: Oh yes, I can also use
ffmpeg -i file.mpg -r 1/1 $filename%03d.bmp
or
ffmpeg -i file.mpg $filename%03d.bmp
to extract all the frames from the MJPEG/RAW stream. However, this is an extra step I do not want to take (just deleting a folder of 30000 JPGs takes 2 minutes alone…).
Can someone think of a working solution for the piping method, or a totally different way of handling this?
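One direction that might fix the pipe (a sketch, not something verified here): the "Invalid Parameter - -interlace" message is the classic symptom of Windows resolving convert to its own system convert.exe instead of ImageMagick's, and the consumer dying that way would also explain the av_interleaved_write_frame(): Invalid argument error on the FFmpeg side. Calling ImageMagick 7 by its unambiguous magick name and giving FFmpeg an explicit image codec for the pipe would look something like:
ffmpeg.exe -y -i "Basler acA1920-155uc (21644989)_20180930_043754312.avi" -f image2pipe -c:v ppm - | magick ppm:- -interlace Plane -quality 85 -append +rotate 270 "%out%"
PPM is a convenient pipe format here because ImageMagick can read a stream of concatenated PPM frames from stdin as a multi-image sequence.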
-
Sequencing MIDI From A Chiptune
28 April 2013, by Multimedia Mike — Outlandish Brainstorms
The feature requests for my game music appreciation website project continue to pour in. Many of them take the form of "please add player support for system XYZ and the chiptune library to go with it." Most of these requests are A) plausible, and B) in process. I have also received recommendations for UI improvements, which I take under consideration. Then there are the numerous requests to port everything from Native Client to JavaScript so that it will work everywhere, even on mobile, a notion which might take a separate post to debunk entirely.
But here's an interesting request about which I would like to speculate: automatically convert a chiptune into a MIDI file. I immediately wanted to dismiss it as impossible or highly implausible. But, as is my habit, I started pondering the concept a little more carefully and decided that there's an outside chance of getting some part of the idea to work.
Intro to MIDI
MIDI stands for Musical Instrument Digital Interface. It's a standard musical interchange format that allows musical instruments and computers to exchange musical information. The file interchange format bears the extension .mid and contains a sequence of numbers that translate into commands separated by time deltas, e.g.: turn key on (this note, this velocity); wait x ticks; turn key off; wait y ticks; etc. I'm vastly oversimplifying, as usual.
MIDI fascinated me back in the days of dialup internet and discrete sound cards (see also my write-up on the Gravis Ultrasound). Typical song-length MIDI files often ranged from a few kilobytes to a few tens of kilobytes. They were significantly smaller than the MOD et al. family of tracker music formats, mostly by virtue of the fact that MIDI files aren't burdened by transporting digital audio samples.
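To make the "key on / wait x ticks / key off" picture concrete, here is a minimal sketch in Python (the constants — middle C, velocity 96, 480 ticks — are arbitrary illustration choices) that hand-writes a one-note format 0 .mid file:
import struct

def var_len(n):
    # MIDI delta times are variable-length quantities: 7 bits per byte,
    # with the high bit set on every byte except the last.
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(out))

# Track: note on (0x90, middle C, velocity 96), wait 480 ticks,
# note off (0x80), then the mandatory end-of-track meta event.
events = (var_len(0)   + bytes([0x90, 60, 96]) +
          var_len(480) + bytes([0x80, 60, 0]) +
          var_len(0)   + bytes([0xFF, 0x2F, 0x00]))
track = b"MTrk" + struct.pack(">I", len(events)) + events
# Header: chunk length 6, format 0, one track, 480 ticks per quarter note.
header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
with open("one_note.mid", "wb") as f:
    f.write(header + track)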
I know I'm missing a lot of details. I haven't dealt much with MIDI in the past… 15 years or so (ever since computer audio became a blur of MP3 and AAC audio). But I'm led to believe it's still relevant. The individual who requested this feature expressed an interest in being able to import the sequenced data into any of the many music programs that can interpret .mid files.
The Pitch
To limit the scope, let's focus on music that comes from the 8-bit Nintendo Entertainment System or the original Game Boy. The former features 2 square wave channels, a triangle wave, a noise channel, and a limited digital channel. The latter creates music via 2 square waves, a wave channel, and a noise channel. The roles these various channels play typically break down as follows: the square waves carry the primary melody, the triangle wave simulates a bass line, the noise channel approximates a variety of percussive sounds, and the DPCM/wave channels are fairly free-form. They can carry random game sound effects or, if they are to assist in the music, are often used for more authentic percussive sounds.
The various channels are controlled via an assortment of memory-mapped hardware registers. These registers are fed values such as frequency, volume, and duty cycle. My idea is to modify the music playback engine to track when various events occur. Whenever a channel is turned on or off, that corresponds to a MIDI key on or off event. If a channel is already playing but a new frequency is written, that would likely count as a note change, so log a key off event followed by a new key on event.
There is the major obstacle of what specific note is represented by a channel in a particular state. The MIDI standard defines 128 different notes spanning 11 octaves. Empirically, I wonder if I could create a table which maps the assorted frequencies to different MIDI notes?
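As a rough sketch of that table idea (Python again; the NTSC 2A03 clock rate is real, but the hook below is hypothetical — where it plugs in depends entirely on the playback engine):
import math

NES_CPU_HZ = 1789773  # NTSC 2A03 CPU clock; PAL machines differ

def pulse_freq(period):
    # NES APU pulse channel: f = CPU / (16 * (period + 1))
    return NES_CPU_HZ / (16.0 * (period + 1))

def triangle_freq(period):
    # The triangle channel runs an octave lower: f = CPU / (32 * (period + 1))
    return NES_CPU_HZ / (32.0 * (period + 1))

def midi_note(freq):
    # Equal temperament, A4 = 440 Hz = MIDI note 69, clamped to MIDI's range
    n = round(69 + 12 * math.log2(freq / 440.0))
    return max(0, min(127, n))

# Hypothetical hook, called whenever the engine sees a frequency register write:
last_note = {}
def on_frequency_write(channel, period, tick, events, is_pulse=True):
    note = midi_note(pulse_freq(period) if is_pulse else triangle_freq(period))
    if last_note.get(channel) == note:
        return                    # same pitch: nothing to log
    if channel in last_note:      # pitch change: key off the old note first
        events.append((tick, "key_off", channel, last_note[channel]))
    events.append((tick, "key_on", channel, note))
    last_note[channel] = note
Since the pulse channel's 11-bit period bottoms out around 54 Hz, only a fraction of MIDI's 128 notes would ever actually appear in such a table.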
I think this strategy would only work with the square and triangle waves. Noise and digital channels? I'm not prepared to tackle that challenge.
Prior Work ?
I have to wonder if there is any existing work in this area. I'm certain that people have wanted to do this before; I wonder if anyone has succeeded?
Just like reverse engineering a binary program entails trying to obtain a higher level abstraction of a program from a very low level representation, this challenge feels like reverse engineering a piece of music as it is being performed and automatically expressing it in a higher level form.
-
iOS allocation grow using x264 encoding
19 July 2013, by cssmhyl — I get the video YUV data in a callback and wrap the plane data in NSData objects. Then I put the NSData objects into an NSArray and push the array onto a queue (NSMutableArray). Here is the code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if ([Application sharedInstance].isRecording) {
        if (captureOutput == self.captureManager.videOutput) {
            uint64_t capturedHostTime = [self GetTickCount];
            int allSpace = capturedHostTime - lastCapturedHostTime;
            NSNumber *spaces = [[NSNumber alloc] initWithInt:allSpace];
            NSNumber *startTime = [[NSNumber alloc] initWithUnsignedLongLong:lastCapturedHostTime];
            lastCapturedHostTime = capturedHostTime;

            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            // Plane 0 is the luma (Y) plane, plane 1 the interleaved chroma (CbCr) plane.
            uint8_t *baseAddress0 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            uint8_t *baseAddress1 = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t width = CVPixelBufferGetWidth(pixelBuffer);
            size_t height = CVPixelBufferGetHeight(pixelBuffer);
            // Note: these lengths assume bytes-per-row == width; a padded buffer
            // would need CVPixelBufferGetBytesPerRowOfPlane instead.
            NSData *baseAddress0Data = [[NSData alloc] initWithBytes:baseAddress0 length:width*height];
            NSData *baseAddress1Data = [[NSData alloc] initWithBytes:baseAddress1 length:width*height/2];
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

            NSArray *array = [[NSArray alloc] initWithObjects:baseAddress0Data, baseAddress1Data, spaces, startTime, nil];
            [baseAddress0Data release];
            [baseAddress1Data release];
            [spaces release];
            [startTime release];
            @synchronized([Application sharedInstance].pearVideoQueue) {
                [[Application sharedInstance] enqueuePearVideo:[Application sharedInstance].pearVideoQueue withData:array];
                [array release];
            }
        }
    }
}
Now I run an operation that takes data from the queue and encodes it with x264. I destroy the array after encoding.
- (void)main {
    while ([Application sharedInstance].pearVideoQueue) {
        if (![Application sharedInstance].isRecording) {
            NSLog(@"encode operation break");
            break;
        }
        if (![[Application sharedInstance].pearVideoQueue isQueueEmpty]) {
            NSArray *pearVideoArray;
            @synchronized([Application sharedInstance].pearVideoQueue) {
                pearVideoArray = [[Application sharedInstance].pearVideoQueue dequeue];
                [[Application sharedInstance] encodeToH264:pearVideoArray];
                [pearVideoArray release];
                pearVideoArray = nil;
            }
        } else {
            [NSThread sleepForTimeInterval:0.01];
        }
    }
}
This is the encoding method:
- (void)encodeX264:(NSArray *)array {
    int i264Nal;
    x264_picture_t pic_out;
    x264_nal_t *p264Nal;

    NSNumber *st = [array lastObject];
    NSNumber *sp = [array objectAtIndex:2];
    uint64_t startTime = [st unsignedLongLongValue];
    int spaces = [sp intValue];
    NSData *baseAddress0Data = [array objectAtIndex:0];
    NSData *baseAddress1Data = [array objectAtIndex:1];
    const char *baseAddress0 = baseAddress0Data.bytes;
    const char *baseAddress1 = baseAddress1Data.bytes;
    if (baseAddress0 == nil) {
        return;
    }

    // Copy the Y plane, then de-interleave the CbCr plane into x264's
    // separate U and V planes (NV12 -> I420).
    memcpy(p264Pic->img.plane[0], baseAddress0, PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT);
    uint8_t *pDst1 = p264Pic->img.plane[1];
    uint8_t *pDst2 = p264Pic->img.plane[2];
    for (int i = 0; i < PRESENT_FRAME_WIDTH*PRESENT_FRAME_HEIGHT/4; i++) {
        *pDst1++ = *baseAddress1++;
        *pDst2++ = *baseAddress1++;
    }

    if (x264_encoder_encode(p264Handle, &p264Nal, &i264Nal, p264Pic, &pic_out) < 0) {
        fprintf(stderr, "x264_encoder_encode failed\n");
    }
    i264Nal = 0;  // note: this zeroing makes the block below unreachable
    if (i264Nal > 0) {
        int i_size;
        int spslen = 0;
        unsigned char spsData[1024];
        char *data = (char *)szBuffer + 100;
        memset(szBuffer, 0, sizeof(szBuffer));
        if (ifFirstSps) {
            ifFirstSps = NO;
            if (![Application sharedInstance].ifAudioStarted) {
                NSLog(@"video first");
                [Application sharedInstance].startTick = startTime;
                NSLog(@"startTick: %llu", startTime);
                [Application sharedInstance].ifVideoStarted = YES;
            }
        }
        for (int i = 0; i < i264Nal; i++) {
            // Grow the NAL buffer if this payload will not fit.
            if (p264Handle->nal_buffer_size < p264Nal[i].i_payload*3/2+4) {
                p264Handle->nal_buffer_size = p264Nal[i].i_payload*2+4;
                x264_free(p264Handle->nal_buffer);
                p264Handle->nal_buffer = x264_malloc(p264Handle->nal_buffer_size);
            }
            i_size = p264Nal[i].i_payload;
            memcpy(data, p264Nal[i].p_payload, p264Nal[i].i_payload);

            // Skip the Annex B start code, then read the NALU type from the
            // first header byte.
            int splitNum = 0;
            for (int j = 0; j < i_size; j++) {
                if (data[j] == 0x01) {
                    splitNum = j + 1;
                    break;
                }
            }
            int type = data[splitNum] & 0x1f;

            // Interpolate a timestamp for each NAL across the frame interval.
            int timeSpace;
            if (i264Nal > 1) {
                timeSpace = spaces/(i264Nal-1)*i;
            } else {
                timeSpace = spaces/i264Nal*i;
            }
            int timeStamp = startTime - [Application sharedInstance].startTick + timeSpace;

            switch (type) {
                case NALU_TYPE_SPS:
                    spslen = i_size - splitNum;
                    memcpy(spsData, data, spslen);
                    break;
                case NALU_TYPE_PPS:
                    timeStamp = timeStamp - timeSpace;
                    [self pushSpsAndPpsQueue:(char *)spsData andppsData:(char *)data withPPSlength:spslen andPPSlength:(i_size-splitNum) andTimeStamp:timeStamp];
                    break;
                case NALU_TYPE_IDR:
                    [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:YES andTimeStamp:timeStamp];
                    break;
                case NALU_TYPE_SLICE:
                case NALU_TYPE_SEI:
                    [self pushVideoNALU:(char *)data withLength:(i_size-splitNum) ifIDR:NO andTimeStamp:timeStamp];
                    break;
                default:
                    break;
            }
        }
    }
}
The question is:
I used Instruments and found that the memory for the captured data keeps increasing, but NSLog shows that the interval between creating the array and releasing it does not grow, and when I release the array its retain count is 1; the retain count of the objects it contains is also 1.
When I skip the encoding, the memory does not increase... I am confused... please help.
The image size is 640x480 pixels. Instruments leaks picture:
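If I had to guess at the culprit (an assumption — the rest of the project is not shown): per-frame Foundation objects created on a background thread are only reclaimed when an autorelease pool drains, so without an NSAutoreleasePool (or, on newer toolchains, an @autoreleasepool block) around each pass of the encode loop, every dequeued frame can leave autoreleased objects behind and allocations will climb exactly as Instruments shows, even though your own retain counts look correct.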