
Media (91)
-
Les Misérables
9 December 2019, by
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019, by
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014, by
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014, by
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit-il mes mails ?
4 February 2014, by
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013, by
Updated: October 2013
Language: French
Type: Image
Other articles (31)
-
Customising by adding a logo, banner or background image
5 September 2013, by
Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.
-
Supporting all media types
13 April 2011, by
Unlike most software and media-sharing platforms, MediaSPIP aims to manage as many different media types as possible. The following are just a few examples from an ever-expanding list of supported formats: images: png, gif, jpg, bmp and more; audio: MP3, Ogg, Wav and more; video: AVI, MP4, OGV, mpg, mov, wmv and more; text, code and other data: OpenOffice, Microsoft Office (Word, PowerPoint, Excel), web (html, CSS), LaTeX, Google Earth and (...)
-
From upload to the final video [standalone version]
31 January 2010, by
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, a SPIP article must be created and the "source" video document attached to it.
When this document is attached to the article, two actions beyond the normal behaviour are executed: retrieval of the technical information of the file's audio and video streams; and the generation of a thumbnail: extraction of a (...)
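As an illustration of that first stage (probing the technical information of the source file's streams), here is a minimal sketch built on the ffmpeg libraries; this is not the SPIPMotion code itself, it uses the same API generation as the code excerpts further down this page, and "input.mp4" is a placeholder file name:

// probe.cpp - a hedged sketch of stream probing with libavformat
extern "C" {
#include <libavformat/avformat.h>
}
#include <stdio.h>

int main()
{
    av_register_all();
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, "input.mp4", NULL, NULL) != 0)
        return 1;                       // could not open the file
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;                       // no usable stream information
    for (unsigned i = 0; i < fmt->nb_streams; i++)
    {
        AVCodecContext *cc = fmt->streams[i]->codec;
        if (cc->codec_type == AVMEDIA_TYPE_VIDEO)
            printf("stream %u: video %dx%d\n", i, cc->width, cc->height);
        else if (cc->codec_type == AVMEDIA_TYPE_AUDIO)
            printf("stream %u: audio %d Hz, %d channel(s)\n",
                   i, cc->sample_rate, cc->channels);
    }
    avformat_close_input(&fmt);
    return 0;
}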
On other sites (4458)
-
Black screen when playing a video with ffmpeg and SDL on iOS
1 April 2012, by patrick
I'm attempting to create a video player on iOS using ffmpeg and SDL. I'm decoding the video stream and attempting to convert the pixel data into an SDL_Surface and then convert that over to an SDL_Texture and render it on screen. However, all I'm getting is a black screen. I know the video file is good and plays fine in VLC. Any idea what I'm missing here?
Initialization code:
// initialize SDL (Simple DirectMedia Layer) to playback the content
if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER) )
{
DDLogError(@"Unable to initialize SDL");
return NO;
}
// create window and renderer
window = SDL_CreateWindow(NULL, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
SDL_WINDOW_OPENGL | SDL_WINDOW_BORDERLESS |
SDL_WINDOW_SHOWN);
if ( window == 0 )
{
DDLogError(@"Unable to initialize SDL Window");
}
renderer = SDL_CreateRenderer(window, -1, 0);
if ( !renderer )
{
DDLogError(@"Unable to initialize SDL Renderer");
}
// Initialize the FFMpeg and register codecs and their respected file formats
av_register_all();

Playback code:
AVFormatContext *formatContext = NULL;
DDLogInfo(@"Opening media file at location:%@", filePath);
const char *filename = [filePath cStringUsingEncoding:NSUTF8StringEncoding];
// Open media file
if( avformat_open_input(&formatContext, filename, NULL, NULL) != 0 )
{
DDLogWarn(@"Unable to open media file. [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to open file.", @"Media playback failed, unable to open file.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_OPEN
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Couldn't open file
}
// Retrieve stream information
if( avformat_find_stream_info(formatContext, NULL) < 0 )
{
DDLogWarn(@"Unable to locate stream information for file. [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to find audio/video stream information.", @"Media playback failed, unable to find stream information.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_FIND_STREAM
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Missing stream information
}
// Find the first video or audio stream
int videoStream = -1;
int audioStream = -1;
DDLogInfo(@"Locating stream information for media file");
for( int index=0; index<(formatContext->nb_streams); index++)
{
if( formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_VIDEO )
{
DDLogInfo(@"Found video stream");
videoStream = index;
break;
}
else if( mediaType == AUDIO_FILE &&
(formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_AUDIO) )
{
DDLogInfo(@"Found audio stream");
audioStream = index;
break;
}
}
if( videoStream == -1 && (audioStream == -1) )
{
DDLogWarn(@"Unable to find video or audio stream for file");
NSString *failureReason = NSLocalizedString(@"Unable to locate audio/video stream.", @"Media playback failed, unable to locate media stream.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_FIND_STREAM
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Didn't find a video or audio stream
}
// Get a pointer to the codec context for the video/audio stream
AVCodecContext *codecContext;
DDLogInfo(@"Attempting to locate the codec for the media file");
if ( videoStream > -1 )
{
codecContext = formatContext->streams[videoStream]->codec;
}
else
{
codecContext = formatContext->streams[audioStream]->codec;
}
// Now that we have information about the codec that the file is using,
// we need to actually open the codec to decode the content
DDLogInfo(@"Attempting to open the codec to playback the media file");
AVCodec *codec;
// Find the decoder for the video stream
codec = avcodec_find_decoder(codecContext->codec_id);
if( codec == NULL )
{
DDLogWarn(@"Unsupported codec! Cannot playback meda file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unsupported file format. Cannot playback media.", @"Media playback failed, unsupported codec.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNSUPPORTED_CODEC
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Codec not found
}
// Open codec
if( avcodec_open2(codecContext, codec, NULL) < 0 )
{
DDLogWarn(@"Unable to open codec! Cannot playback meda file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to open media codec. Cannot playback media.", @"Media playback failed, cannot open codec.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_LOAD_CODEC
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Could not open codec
}
// Allocate player frame
AVFrame *playerFrame=avcodec_alloc_frame();
// Allocate an AVFrame structure
AVFrame *RGBframe=avcodec_alloc_frame();
if( RGBframe==NULL )
{
// could not create a frame to convert our video frame
// to a 16-bit RGB565 frame.
DDLogWarn(@"Unable to convert video frame. Cannot playback meda file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Problems interpreting video frame information.", @"Media playback failed, cannot convert frame.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_LOAD_FRAME
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Could not allocate the conversion frame
}
int frameFinished = 0;
AVPacket packet;
// Figure out the destination width/height based on the screen size
int destHeight = codecContext->height;
int destWidth = codecContext->width;
if ( destHeight > SCREEN_HEIGHT || (destWidth > SCREEN_WIDTH) )
{
if ( destWidth > SCREEN_WIDTH )
{
float percentDiff = ( destWidth - SCREEN_WIDTH ) / (float)destWidth;
destWidth = destWidth - (int)(destWidth * percentDiff );
destHeight = destHeight - (int)(destHeight * percentDiff );
}
if ( destHeight > SCREEN_HEIGHT )
{
float percentDiff = (destHeight - SCREEN_HEIGHT ) / (float)destHeight;
destWidth = destWidth - (int)(destWidth * percentDiff );
destHeight = destHeight - (int)(destHeight * percentDiff );
}
}
struct SwsContext *swsContext = sws_getContext(codecContext->width, codecContext->height, codecContext->pix_fmt, destWidth, destHeight, PIX_FMT_RGB565, SWS_BICUBIC, NULL, NULL, NULL);
while( av_read_frame(formatContext, &packet) >= 0 )
{
// Is this a packet from the video stream?
if( packet.stream_index == videoStream )
{
// Decode video frame
avcodec_decode_video2(codecContext, playerFrame, &frameFinished, &packet);
// Did we get a video frame?
if( frameFinished != 0 )
{
// Convert the content over to RGB565 (16-bit RGB) to playback with SDL
uint8_t *dst[3];
int dstStride[3];
// Set the destination stride
for (int plane = 0; plane < 3; plane++)
{
dstStride[plane] = codecContext->width*2;
dst[plane]= (uint8_t*) malloc(dstStride[plane]*destHeight);
}
sws_scale(swsContext, playerFrame->data,
playerFrame->linesize, 0,
destHeight,
dst, dstStride);
// Create the SDL surface frame that we are going to use to draw our video
// 16-bit RGB so 2 bytes per pixel (pitch = width*(bytes per pixel))
int pitch = destWidth*2;
SDL_Surface *frameSurface = SDL_CreateRGBSurfaceFrom(dst[0], destWidth, destHeight, 16, pitch, 0, 0, 0, 0);
// Clear the old frame first
SDL_RenderClear(renderer);
// Move the frame over to a texture and render it on screen
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, frameSurface);
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
// Draw the new frame on the screen
SDL_RenderPresent(renderer);
SDL_DestroyTexture(texture);
SDL_FreeSurface(frameSurface);
// free the temporary RGB565 buffers allocated for this frame
for (int plane = 0; plane < 3; plane++)
{
free(dst[plane]);
}
}
}
// release the packet that was allocated by av_read_frame
av_free_packet(&packet);
}
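For what it's worth, one detail stands out in the loop above: the texture is created from the surface but is never copied to the renderer before SDL_RenderPresent is called, so the cleared (black) backbuffer is what gets presented. A minimal sketch of the step that appears to be missing, placed between texture creation and presentation:

// copy the whole texture to the whole rendering target; without a
// SDL_RenderCopy the presented backbuffer stays black
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);
-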
Qt: How to create a .mp4 from a collection of QOpenGLFramebufferObjects or QImages or GLuint textures
27 September 2017, by Programist
I am developing a Qt app for iOS, Android & OSX.
Situation:
I have an std::vector of QOpenGLFramebufferObjects. Each QOpenGLFramebufferObject can of course provide its own QImage or a GLuint texture by doing a takeTexture. So you can also say that I have a collection of QImages or GLuint textures.
Problem:
Now, I want to create a .mp4 video file out of these which works at least on iOS, Android & OSX.
How should I do this? Any examples doing this with Qt? Which classes in Qt should I be looking into?
ffmpeg or GStreamer, whichever works with Qt. But I need to know how to pass these QImages or GLuint textures into the required component or API to create the video.
Should I use QVideoEncoderSettings to create the video?
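One possible route (a sketch under assumptions, not a definitive answer): since the question allows ffmpeg, each QImage can be normalised to raw RGBA and piped to an external ffmpeg process via QProcess. The container of frames, the frame rate ("30"), the frame size and the output name "out.mp4" below are all placeholder assumptions:

#include <QProcess>
#include <QImage>
#include <QVector>
#include <QStringList>
#include <QString>

// a hedged sketch: pipe raw RGBA frames into the ffmpeg command-line tool
void encodeToMp4(const QVector<QImage> &frames, int width, int height)
{
    QProcess ffmpeg;
    QStringList args;
    args << "-y"
         << "-f" << "rawvideo"                        // raw frames on stdin
         << "-pix_fmt" << "rgba"
         << "-s" << QString("%1x%2").arg(width).arg(height)
         << "-r" << "30"                              // assumed frame rate
         << "-i" << "-"                               // read from stdin
         << "-pix_fmt" << "yuv420p"                   // widely playable output
         << "out.mp4";                                // placeholder name
    ffmpeg.start("ffmpeg", args);
    if (!ffmpeg.waitForStarted())
        return;                                       // ffmpeg not found
    for (const QImage &img : frames)
    {
        // normalise every frame to one size and byte layout before writing
        QImage rgba = img.convertToFormat(QImage::Format_RGBA8888)
                         .scaled(width, height);
        ffmpeg.write(reinterpret_cast<const char *>(rgba.constBits()),
                     width * height * 4);             // 4 bytes per RGBA pixel
    }
    ffmpeg.closeWriteChannel();
    ffmpeg.waitForFinished(-1);
}

The same frames could instead be fed to the libav* C API directly (as the other excerpts on this page do), which avoids spawning an external process but takes considerably more code.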
-
Suggestions on moving textures in system RAM to the GPU in DirectX?
11 February 2014, by OddlyOrdinary
I've used FFMPEG to decode a 1080p 60fps video and have a pool of ready texture color information, stored as just a uint8_t array. I've managed to create ID3D10Texture2D textures and update them with the color information, but my performance varies from 45-65fps.
For reference this is a test application where I'm attempting to draw the video on a mesh. There are no other objects being processed by DirectX or my application.
My original implementation involved me getting the pixel information from the pool of decoded video frames, using a simple Dynamic texture2d, mapping/memcopying/unmapping. The memcopy was very expensive at about 20% of my runtime.
I've changed to decoding the video straight to a pool of D3D10_USAGE_DYNAMIC/D3D10_CPU_ACCESS_WRITE textures, and I'm able to always have textures ready before each update loop. I then have a Texture2D applied to the mesh that I'm updating with
ID3D10Texture2D* decodedFrame = mDecoder->GetFrame();
if (decodedFrame)
{
    //ID3D10Device* device;
    device->CopyResource(mTexture, decodedFrame);
}

From my understanding CopyResource should be faster, but I don't see a noticeable difference. My questions are: is there a better way? Also, for textures created with D3D10_USAGE_DYNAMIC, is there a way to tell DirectX that I intend to use it on the GPU the next frame?
The last thing I can think of would be decoding to a D3D10_USAGE_DEFAULT texture, but I don't know how I would create it using existing pixel information in system RAM. Suggestions would be greatly appreciated, thanks!
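On that last point, a default-usage texture can be initialised directly from pixels already in system memory by supplying initial data at creation time. A minimal sketch under the question's assumptions (tightly packed 32-bit RGBA pixels; pixels, width and height are placeholders for the decoded frame data):

// a sketch, not a drop-in fix: create a default-usage texture whose
// contents come straight from a system-memory buffer in one call
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;   // assumed pixel layout
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

D3D10_SUBRESOURCE_DATA init = {};
init.pSysMem = pixels;                       // the uint8_t array in RAM
init.SysMemPitch = width * 4;                // bytes per row

ID3D10Texture2D *texture = NULL;
HRESULT hr = device->CreateTexture2D(&desc, &init, &texture);

For per-frame updates without recreating the texture, UpdateSubresource on a default-usage texture is the usual alternative to the map/memcpy/unmap path.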