
Media (2)
-
Valkaama DVD Label
4 October 2011, by
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal guide
16 May 2011, by
Updated: May 2011
Language: English
Type: Text
Other articles (112)
-
Customizing by adding your logo, banner, or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011, by
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
On other sites (14984)
-
Trim / Cut video on Android using FFMpeg's Copy
6 August 2012, by Kevin P
We're trying to replicate the functionality of this command-line ffmpeg invocation using the FFmpeg C API through JNI calls on Android.
ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.file output.file
Basically, given a start and end time, we wish to copy (not re-encode) a small(er) segment of video from the larger (input) video source.
We've been using the wonderful JavaCV wrapper to OpenCV and FFmpeg, but we just cannot figure out how to do this simple bit of work. We've been scouring ffmpeg.c and related sources, and while I now understand that it switches to stream_copy and remuxing rather than re-encoding when the codec is specified as copy, I cannot for the life of me identify which series of method calls to make to replicate this through the C API.
Does anyone have an example JNI file for doing this? Or are there rockstar C types who can explain how I get from that command line to API calls? We've spent the better part of two weeks working on this (we're not native C guys), and we're at the point where we just need to ship some code. Any example code, especially JNI code or method-call maps, would be greatly appreciated!
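For orientation on what the copy path does: it reads packets, keeps the ones whose timestamps fall inside the requested [start, start + duration] window (converted from seconds into the stream's time base), and writes them out unchanged. Below is a sketch of just that window-selection arithmetic in plain C — no FFmpeg headers; the `timebase` struct and both function names are illustrative, not part of the FFmpeg API — to show the selection logic the asker is after, not the actual library calls:

```c
#include <stdint.h>

/* A stream time base num/den, e.g. 1/90000 for MPEG-TS streams.
 * Illustrative stand-in for FFmpeg's rational time base. */
struct timebase { int num, den; };

/* Convert a time in seconds into ticks of the given time base. */
static int64_t seconds_to_ts(double seconds, struct timebase tb)
{
    return (int64_t)(seconds * tb.den / tb.num);
}

/* Would a packet with this timestamp survive `-ss start -t duration`? */
static int packet_in_window(int64_t pts, struct timebase tb,
                            double start, double duration)
{
    int64_t lo = seconds_to_ts(start, tb);
    int64_t hi = seconds_to_ts(start + duration, tb);
    return pts >= lo && pts < hi;
}
```

In the real API the same filtering happens on `AVPacket` timestamps during a read/write loop, with timestamps rescaled between the input and output streams' time bases before writing.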
-
Black screen when playing a video with ffmpeg and SDL on iOS
1 April 2012, by patrick
I'm attempting to create a video player on iOS using ffmpeg and SDL. I'm decoding the video stream, converting the pixel data into an SDL_Surface, then converting that to an SDL_Texture and rendering it on screen. However, all I'm getting is a black screen. I know the video file is good; it plays fine in VLC. Any idea what I'm missing here?
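One hedged observation before the quoted code: in the render loop shown below, a texture is created from the surface and SDL_RenderPresent is called, but SDL_RenderCopy is never invoked — so what gets presented is the cleared (black) backbuffer. A sketch of the per-frame sequence SDL2 expects, assuming the question's `renderer` and a decoded `frameSurface` (names taken from the question):

```c
#include <SDL.h>

/* Present one decoded frame. The SDL_RenderCopy step is the one the
 * question's loop omits; without it, SDL_RenderPresent flips an empty
 * (black) backbuffer to the screen. */
static void present_frame(SDL_Renderer *renderer, SDL_Surface *frameSurface)
{
    SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, frameSurface);
    if (!texture)
        return;
    SDL_RenderClear(renderer);                      /* drop the previous frame      */
    SDL_RenderCopy(renderer, texture, NULL, NULL);  /* blit texture to backbuffer   */
    SDL_RenderPresent(renderer);                    /* flip the backbuffer to screen */
    SDL_DestroyTexture(texture);
}
```

This is only a sketch of the ordering, not a drop-in fix; the surface's pixel format and stride still have to match what SDL_CreateRGBSurfaceFrom was told.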
Initialization code:
// initialize SDL (Simple DirectMedia Layer) to playback the content
if( SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER) )
{
DDLogError(@"Unable to initialize SDL");
return NO;
}
// create window and renderer
window = SDL_CreateWindow(NULL, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT,
SDL_WINDOW_OPENGL | SDL_WINDOW_BORDERLESS |
SDL_WINDOW_SHOWN);
if ( window == 0 )
{
DDLogError(@"Unable to initialize SDL Window");
}
renderer = SDL_CreateRenderer(window, -1, 0);
if ( !renderer )
{
DDLogError(@"Unable to initialize SDL Renderer");
}
// Initialize FFmpeg and register codecs and their respective file formats
av_register_all();
Playback code:
AVFormatContext *formatContext = NULL;
DDLogInfo(@"Opening media file at location:%@", filePath);
const char *filename = [filePath cStringUsingEncoding:NSUTF8StringEncoding];
// Open media file
if( avformat_open_input(&formatContext, filename, NULL, NULL) != 0 )
{
DDLogWarn(@"Unable to open media file. [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to open file.", @"Media playback failed, unable to open file.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_OPEN
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Couldn't open file
}
// Retrieve stream information
if( avformat_find_stream_info(formatContext, NULL) < 0 )
{
DDLogWarn(@"Unable to locate stream information for file. [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to find audio/video stream information.", @"Media playback failed, unable to find stream information.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_FIND_STREAM
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Missing stream information
}
// Find the first video or audio stream
int videoStream = -1;
int audioStream = -1;
DDLogInfo(@"Locating stream information for media file");
for( int index=0; index<(formatContext->nb_streams); index++)
{
if( formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_VIDEO )
{
DDLogInfo(@"Found video stream");
videoStream = index;
break;
}
else if( mediaType == AUDIO_FILE &&
(formatContext->streams[index]->codec->codec_type==AVMEDIA_TYPE_AUDIO) )
{
DDLogInfo(@"Found audio stream");
audioStream = index;
break;
}
}
if( videoStream == -1 && (audioStream == -1) )
{
DDLogWarn(@"Unable to find video or audio stream for file");
NSString *failureReason = NSLocalizedString(@"Unable to locate audio/video stream.", @"Media playback failed, unable to locate media stream.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_FIND_STREAM
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Didn't find a video or audio stream
}
// Get a pointer to the codec context for the video/audio stream
AVCodecContext *codecContext;
DDLogInfo(@"Attempting to locate the codec for the media file");
if ( videoStream > -1 )
{
codecContext = formatContext->streams[videoStream]->codec;
}
else
{
codecContext = formatContext->streams[audioStream]->codec;
}
// Now that we have information about the codec that the file is using,
// we need to actually open the codec to decode the content
DDLogInfo(@"Attempting to open the codec to playback the media file");
AVCodec *codec;
// Find the decoder for the video stream
codec = avcodec_find_decoder(codecContext->codec_id);
if( codec == NULL )
{
DDLogWarn(@"Unsupported codec! Cannot playback media file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unsupported file format. Cannot playback media.", @"Media playback failed, unsupported codec.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNSUPPORTED_CODEC
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Codec not found
}
// Open codec
if( avcodec_open2(codecContext, codec, NULL) < 0 )
{
DDLogWarn(@"Unable to open codec! Cannot playback media file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Unable to open media codec. Cannot playback media.", @"Media playback failed, cannot open codec.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_LOAD_CODEC
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Could not open codec
}
// Allocate player frame
AVFrame *playerFrame=avcodec_alloc_frame();
// Allocate an AVFrame structure
AVFrame *RGBframe=avcodec_alloc_frame();
if( RGBframe==NULL )
{
// could not create a frame to convert our video frame
// to a 16-bit RGB565 frame.
DDLogWarn(@"Unable to convert video frame. Cannot playback media file [File:%@]", filePath);
NSString *failureReason = NSLocalizedString(@"Problems interpreting video frame information.", @"Media playback failed, cannot convert frame.");
if ( error != NULL )
{
*error = [NSError errorWithDomain:MediaPlayerErrorDomain
code:UNABLE_TO_LOAD_FRAME
userInfo:[NSDictionary dictionaryWithObject:failureReason
forKey:NSLocalizedFailureReasonErrorKey]];
}
return NO; // Could not open codec
}
int frameFinished = 0;
AVPacket packet;
// Figure out the destination width/height based on the screen size
int destHeight = codecContext->height;
int destWidth = codecContext->width;
if ( destHeight > SCREEN_HEIGHT || (destWidth > SCREEN_WIDTH) )
{
if ( destWidth > SCREEN_WIDTH )
{
float percentDiff = ( destWidth - SCREEN_WIDTH ) / (float)destWidth;
destWidth = destWidth - (int)(destWidth * percentDiff );
destHeight = destHeight - (int)(destHeight * percentDiff );
}
if ( destHeight > SCREEN_HEIGHT )
{
float percentDiff = (destHeight - SCREEN_HEIGHT ) / (float)destHeight;
destWidth = destWidth - (int)(destWidth * percentDiff );
destHeight = destHeight - (int)(destHeight * percentDiff );
}
}
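An aside on the down-scaling block above: both clamps remove the same fraction from width and height, so the result preserves the aspect ratio. The same arithmetic, isolated into a plain-C helper (the function and parameter names are mine, not from the question):

```c
/* Shrink (w, h) proportionally so the frame fits within (maxW, maxH).
 * Mirrors the question's two-pass clamp: each pass computes the fraction
 * by which one dimension overshoots and removes that fraction from both. */
static void fit_within(int *w, int *h, int maxW, int maxH)
{
    if (*w > maxW) {
        float diff = (*w - maxW) / (float)*w;  /* fraction to remove */
        *h -= (int)(*h * diff);
        *w -= (int)(*w * diff);
    }
    if (*h > maxH) {
        float diff = (*h - maxH) / (float)*h;
        *w -= (int)(*w * diff);
        *h -= (int)(*h * diff);
    }
}
```

For example, a 1920x1080 frame clamped to 480x320 comes out as 480x270 after the first pass, and the second pass then has nothing to do.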
SwsContext *swsContext = sws_getContext(codecContext->width, codecContext->height, codecContext->pix_fmt, destWidth, destHeight, PIX_FMT_RGB565, SWS_BICUBIC, NULL, NULL, NULL);
while( av_read_frame(formatContext, &packet) >= 0 )
{
// Is this a packet from the video stream?
if( packet.stream_index == videoStream )
{
// Decode video frame
avcodec_decode_video2(codecContext, playerFrame, &frameFinished, &packet);
// Did we get a video frame?
if( frameFinished != 0 )
{
// Convert the content over to RGB565 (16-bit RGB) to playback with SDL
uint8_t *dst[3];
int dstStride[3];
// Set the destination stride
for (int plane = 0; plane < 3; plane++)
{
dstStride[plane] = codecContext->width*2;
dst[plane]= (uint8_t*) malloc(dstStride[plane]*destHeight);
}
sws_scale(swsContext, playerFrame->data,
playerFrame->linesize, 0,
destHeight,
dst, dstStride);
// Create the SDL surface frame that we are going to use to draw our video
// 16-bit RGB so 2 bytes per pixel (pitch = width*(bytes per pixel))
int pitch = destWidth*2;
SDL_Surface *frameSurface = SDL_CreateRGBSurfaceFrom(dst[0], destWidth, destHeight, 16, pitch, 0, 0, 0, 0);
// Clear the old frame first
SDL_RenderClear(renderer);
// Move the frame over to a texture and render it on screen
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, frameSurface);
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
// Draw the new frame on the screen
SDL_RenderPresent(renderer);
SDL_DestroyTexture(texture);
SDL_FreeSurface(frameSurface);
}
-
Rails Streamio FFMPEG taking a screenshot of the movie and uploading with CarrierWave
5 June 2016, by Felix
I have a form where I can upload a movie. It's uploaded with CarrierWave.
During this process I want to take a screenshot of the movie while it uploads.
How can I do this with Streamio FFMPEG?
My code looks like this at the moment:
# Uploads a video
def uploadMovie
  @channels = Channel.all
  @vid = Movie.new(movies_params)
  @channel = Channel.find(params[:channel_id])
  @vid.channel = @channel
  if @vid.save
    flash[:notice] = t("flash.saved")
    render :add
  else
    render :add
  end
end
Do I have to do this in the controller method or in the CarrierWave uploader?
Update: I tried it this way:
if @vid.save
  flash[:notice] = t("flash.saved")
  movieFile = FFMPEG::Movie.new(@vid.video.to_s)
  screenshot = movieFile.screenshot("uploads/screenshot", :seek_time => 10)
  render :add
else
But then I got this error:
s3.amazonaws.com/uploads/movie/video/6/2016-04-24_16.26.10.mp4' does not exist