
Media (2)
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal Guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
Other articles (86)
-
User profiles
12 April 2011, by — Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialized, visible only when the visitor is logged in on the site.
Users can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...) -
Sites built with MediaSPIP
2 May 2011, by — This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
HTML5 audio and video support
10 April 2011 — MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (5070)
-
Xvfb records a black screen
11 May 2024, by Vivek — I am trying to record a video by running Xvfb inside a Docker image. No matter what I do, I get a black screen.


The screen size is the same in Xvfb, ffmpeg, and Puppeteer.


It would be really great if someone could help.



start-xvfb.sh
---------------------------------------------------------------------
#!/bin/sh

# Start Xvfb in the background
Xvfb :99 -screen 0 1280x720x24 &

# Set the display environment variable
export DISPLAY=:99

# Run the application
npm run dev
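
One possible gap in this script: Xvfb is started with `&`, so the app (and ffmpeg) may run before the display is actually ready, which on its own can produce empty captures. A minimal sketch of a readiness wait, assuming `xdpyinfo` is installed in the image (it is not in the Dockerfile below):

```shell
# Sketch: poll a command once per second until it succeeds or a timeout
# (in seconds) expires. Returns 0 on success, 1 on timeout.
wait_for() {
  cmd=$1
  timeout=${2:-10}
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Intended use in start-xvfb.sh (not run here; xdpyinfo is an assumed tool):
#   wait_for "xdpyinfo -display :99" 10 || { echo "Xvfb never came up"; exit 1; }
```

This only rules out a startup race; it does not by itself explain a persistently black recording.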



Dockerfile


FROM node:lts-alpine3.19

# Install dependencies using apk
RUN apk update && \
 apk add --no-cache \
 gnupg \
 ffmpeg \
 libx11 \
 libxcomposite \
 libxdamage \
 libxi \
 libxtst \
 nss \
 cups-libs \
 libxrandr \
 alsa-lib \
 pango \
 gtk+3.0 \
 xvfb \
 bash \
 curl \
 udev \
 ttf-freefont \
 chromium \
 chromium-chromedriver

# Set working directory
WORKDIR /app

# Copy package.json and install dependencies
COPY package.json .
RUN npm install --force

# Copy remaining source code
COPY . .

# Add a script to start Xvfb
COPY start-xvfb.sh /app/start-xvfb.sh
RUN chmod +x /app/start-xvfb.sh

# Expose the port
EXPOSE 4200
EXPOSE 3000

# Command to start Xvfb and run the application
CMD ["./start-xvfb.sh"]



Below is the code that launches Puppeteer from a Node.js application and spawns an ffmpeg process:


import puppeteer from 'puppeteer';
import { spawn } from 'child_process';

export class UnixBrowserRecorder implements Recorder {

  url = 'https://stackoverflow.com/questions/3143698/uncaught-syntaxerror-unexpected-token'; // Replace with your URL
  outputFilePath = `/app/output_video.mp4`; // Output file path within the container
  durationInSeconds = 6; // Duration of the video in seconds
  resolution = '1280x720';

  public async capture(): Promise<string> {
    const browser = await puppeteer.launch({
      args: [
        '--no-sandbox', // Required in Docker
        '--disable-setuid-sandbox', // Required in Docker
        '--disable-dev-shm-usage', // Required in Docker
        '--headless', // Run browser in headless mode
        '--disable-gpu', // Disable GPU acceleration
        `--window-size=${this.resolution}` // Set window size
      ],
      executablePath: '/usr/bin/chromium' // Path to the Chromium executable
    });

    const page = await browser.newPage();
    await page.goto(this.url);

    await page.screenshot({
      type: 'png', // can also be "jpeg" or "webp"
      path: `/app/screenshot.png`, // where to save it
      fullPage: true // scrolls down to capture everything
    });

    // ffmpeg -video_size `DISPLAY=:5 xdpyinfo | grep 'dimensions:' | awk '{print $2}'` -framerate 30 -f x11grab -i :5.0+0,0 output.mpg

    const ffmpegProcess = spawn('ffmpeg', [
      '-video_size', this.resolution,
      '-framerate', '30',
      '-f', 'x11grab',
      '-i', ':99', // Use display :99 (assuming Xvfb is running on this display)
      '-t', this.durationInSeconds.toString(),
      '-c:v', 'libx264',
      '-loglevel', 'debug',
      '-pix_fmt', 'yuv420p',
      this.outputFilePath
    ]);

    // Log ffmpeg output
    ffmpegProcess.stdout.on('data', data => {
      console.log(`ffmpegProcess stdout: ${data}`);
    });

    ffmpegProcess.stderr.on('data', data => {
      console.error(`ffmpegProcess stderr: ${data}`);
    });

    // Handle ffmpeg process exit
    ffmpegProcess.on('close', code => {
      console.log(`ffmpeg process exited with code ${code}`);
    });

    // Wait for the duration to complete
    await new Promise(resolve => setTimeout(resolve, this.durationInSeconds * 1000));

    // Close the ffmpeg stream and process
    ffmpegProcess.stdin.end();
    // Close Puppeteer
    await page.close();
    await browser.close();

    return 'Video generated successfully';
  }
}
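One thing worth checking, as an assumption (the question does not show ffmpeg's debug output): Chromium launched with `--headless` renders off-screen and never maps a window on the Xvfb display, so `x11grab` sees only the empty root window, which would record as black. A small shell sketch of keeping the browser and the grab input on the same display, assuming the `:99` display from the start script:

```shell
# Sketch: derive the x11grab input spec from DISPLAY so ffmpeg records the
# same screen the browser draws on (":99.0+0,0" = display :99, screen 0,
# offset 0,0). Defaults to :99 if DISPLAY is unset.
x11grab_input() {
  printf '%s.0+0,0' "${DISPLAY:-:99}"
}

# Intended use inside the container (not run here): drop --headless so
# Chromium actually draws on the Xvfb display, then record that display.
#   DISPLAY=:99 chromium --no-sandbox --window-size=1280,720 "$URL" &
#   ffmpeg -f x11grab -video_size 1280x720 -i "$(x11grab_input)" -t 5 out.mp4
```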




-
Revision 3331: Access to the configuration is added to the menus
25 April 2010, by kent1 — Log: Access to the configuration is added to the menus
-
iPhone camera video via AVCaptureSession: converting CMSampleBufferRef to h.264 with ffmpeg is the issue. Please advise
4 January 2012, by isaiah — My goal is h.264/AAC, MPEG2-TS streaming to a server from an iPhone. Currently my FFmpeg+libx264 source compiles successfully (I know about the GNU license); I want a demo program.
What I want to know:
1. Is the conversion from CMSampleBufferRef to AVPicture data succeeding?
avpicture_fill((AVPicture*)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height);
pFrame's linesize and data are not null, but its pts is -9233123123, and the same goes for outpic. Because of this, I guess I get the 'non-strictly-monotonic PTS' message.
2. This log repeats:
encoding frame (size= 0)
encoding frame = ""
'avcodec_encode_video' returning 0 means success, but it is always 0. I don't know what to do...
2011-06-01 15:15:14.199 AVCam[1993:7303] pFrame = avcodec_alloc_frame();
2011-06-01 15:15:14.207 AVCam[1993:7303] avpicture_fill = 1228800
Video encoding
2011-06-01 15:15:14.215 AVCam[1993:7303] codec = 5841844
[libx264 @ 0x1441e00] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x1441e00] profile Constrained Baseline, level 2.0
[libx264 @ 0x1441e00] non-strictly-monotonic PTS
encoding frame (size= 0)
encoding frame
[libx264 @ 0x1441e00] final ratefactor: 26.743
I have to guess that the 'non-strictly-monotonic PTS' message is the cause of all the problems. What does 'non-strictly-monotonic PTS' mean?
This is the source:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        NSLog(@"sample buffer is not ready. Skipping sample");
        return;
    }
    if ([isRecordingNow isEqualToString:@"YES"]) {
        lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if (videoWriter.status != AVAssetWriterStatusWriting) {
            [videoWriter startWriting];
            [videoWriter startSessionAtSourceTime:lastSampleTime];
        }
        if (captureOutput == videooutput) {
            [self newVideoSample:sampleBuffer];
            CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            // access the data
            int width = CVPixelBufferGetWidth(pixelBuffer);
            int height = CVPixelBufferGetHeight(pixelBuffer);
            unsigned char *rawPixelBase = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);
            AVFrame *pFrame;
            pFrame = avcodec_alloc_frame();
            pFrame->quality = 0;
            NSLog(@"pFrame = avcodec_alloc_frame();");
            // int bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            // int bytesSize = height * bytesPerRow;
            // unsigned char *pixel = (unsigned char *)malloc(bytesSize);
            // unsigned char *rowBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            // memcpy(pixel, rowBase, bytesSize);
            int avpicture_fillNum = avpicture_fill((AVPicture *)pFrame, rawPixelBase, PIX_FMT_RGB32, width, height); // PIX_FMT_RGB32 // PIX_FMT_RGB8
            // NSLog(@"rawPixelBase = %i , rawPixelBase -s = %s", rawPixelBase, rawPixelBase);
            NSLog(@"avpicture_fill = %i", avpicture_fillNum);
            // NSLog(@"width = %i, height = %i", width, height);
            // Do something with the raw pixels here
            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            // avcodec_init();
            // avdevice_register_all();
            av_register_all();
            AVCodec *codec;
            AVCodecContext *c = NULL;
            int out_size, size, outbuf_size;
            // FILE *f;
            uint8_t *outbuf;
            printf("Video encoding\n");
            /* find the h.264 video encoder */
            codec = avcodec_find_encoder(CODEC_ID_H264); // avcodec_find_encoder_by_name("libx264");
            NSLog(@"codec = %i", codec);
            if (!codec) {
                fprintf(stderr, "codec not found\n");
                exit(1);
            }
            c = avcodec_alloc_context();
            /* put sample parameters */
            c->bit_rate = 400000;
            c->bit_rate_tolerance = 10;
            c->me_method = 2;
            /* resolution must be a multiple of two */
            c->width = 352;  // width
            c->height = 288; // height
            /* frames per second */
            c->time_base = (AVRational){1, 25};
            c->gop_size = 10; /* emit one intra frame every ten frames */
            // c->max_b_frames = 1;
            c->pix_fmt = PIX_FMT_YUV420P;
            c->me_range = 16;
            c->max_qdiff = 4;
            c->qmin = 10;
            c->qmax = 51;
            c->qcompress = 0.6f;
            /* open it */
            if (avcodec_open(c, codec) < 0) {
                fprintf(stderr, "could not open codec\n");
                exit(1);
            }
            /* alloc image and output buffer */
            outbuf_size = 100000;
            outbuf = malloc(outbuf_size);
            size = c->width * c->height;
            AVFrame *outpic = avcodec_alloc_frame();
            int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
            // create buffer for the output image
            uint8_t *outbuffer = (uint8_t *)av_malloc(nbytes);
#pragma mark -
            fflush(stdout);
            // int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
            // uint8_t *buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
            //
            // // UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
            // CGImageRef newCgImage = [self imageFromSampleBuffer:sampleBuffer]; // [image CGImage];
            //
            // CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
            // CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
            // buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);
            //
            // avpicture_fill((AVPicture *)pFrame, buffer, PIX_FMT_RGB8, c->width, c->height);
            avpicture_fill((AVPicture *)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);
            struct SwsContext *fooContext = sws_getContext(c->width, c->height,
                                                           PIX_FMT_RGB8,
                                                           c->width, c->height,
                                                           PIX_FMT_YUV420P,
                                                           SWS_FAST_BILINEAR, NULL, NULL, NULL);
            // perform the conversion
            sws_scale(fooContext, pFrame->data, pFrame->linesize, 0, c->height, outpic->data, outpic->linesize);
            // Here is where I try to convert to YUV
            /* encode the image */
            out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
            printf("encoding frame (size=%5d)\n", out_size);
            printf("encoding frame %s\n", outbuf);
            // fwrite(outbuf, 1, out_size, f);
            // free(buffer);
            // buffer = NULL;
            /* add sequence end code to have a real mpeg file */
            // outbuf[0] = 0x00;
            // outbuf[1] = 0x00;
            // outbuf[2] = 0x01;
            // outbuf[3] = 0xb7;
            // fwrite(outbuf, 1, 4, f);
            // fclose(f);
            free(outbuf);
            avcodec_close(c);
            av_free(c);
            av_free(pFrame);
            printf("\n");
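
On the warning itself: libx264 expects each frame it receives to carry a presentation timestamp (PTS) strictly greater than the previous frame's, and the code above never assigns `outpic->pts`, which would explain frames arriving with the same uninitialized value. A small shell sketch (a hypothetical check, not part of the original post) of what "strictly monotonic" means for a list of PTS values:

```shell
# Check that a newline-separated list of PTS values strictly increases;
# "non-strictly-monotonic PTS" means this property failed for the frames
# handed to libx264. Exits 0 when strictly increasing, 1 otherwise.
pts_monotonic() {
  awk 'NR > 1 && $1 <= prev { bad = 1 } { prev = $1 } END { exit bad }'
}

# Hypothetical usage (ffprobe assumed on PATH, out.mp4 assumed to exist):
#   ffprobe -v error -select_streams v:0 -show_entries frame=pts \
#     -of csv=p=0 out.mp4 | pts_monotonic
```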