
Media (1)
-
GetID3 - File information block
9 April 2013, by
Updated: May 2013
Language: French
Type: Image
Other articles (46)
-
The SPIPmotion queue
28 November 2010, by
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)
-
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP-type sites while installing the functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge, since the usual SPIP private area is no longer used.
First of all, you must have installed the same files as the installation (...)
-
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites that publish documents of all kinds online.
It creates "media", namely: a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only one document can be linked to a so-called "media" article;
On other sites (6980)
-
libavformat stuck in ff_network_wait_fd
3 June 2020, by Daniel
I'm using libavformat for remuxing some live video feeds (RTSP).



I can't really create a minimal reproducible example because the issue is not reproducible, but I have a debugger attached, and if there is even a small chance to examine the root cause I don't want to skip it.



The stream is opened via avformat_open_input, and there is a custom IO for writing the output (avformat_alloc_output_context2).



The problem is that libavformat is stuck in ff_network_wait_fd, at least according to gdb:



#0 0xb673e120 in poll () at ../sysdeps/unix/syscall-template.S:84
#1 0x004a3ff4 in ff_network_wait_fd (fd=-1357954584, write=1) at libavformat/network.c:72
#2 0xaf0f428e in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)




There was a network failure during the connection or the stream, and I don't have any aggressive stream-closing mechanism (everything is non-blocking, there are no waits or sleeps, everything is event-driven via libev).



This has happened only once in a month, so it's very hard to reproduce, but the process is still running and gdb is attached.



I'm curious whether there is any way to dig deeper. The stack seems corrupted, and the fd also looks invalid for what ff_network_wait_fd is supposed to be waiting on.



Also, I never call ff_network_wait_fd directly, but there is an alternative: ff_network_wait_fd_timeout. Is it possible to ask libavformat to use this alternative so it won't block my thread when the network is unreliable?
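For reference, the usual public-API way to keep libavformat's blocking network calls from hanging forever is to install an interrupt callback on the AVFormatContext; the interrupt-aware wait paths check it and abort the blocking call with AVERROR_EXIT. The sketch below is only an illustration, not part of the original post: the stream_ctx struct, the deadline logic and the open_rtsp helper are hypothetical, and the RTSP socket-timeout option name differs between FFmpeg versions.

#include <libavformat/avformat.h>
#include <libavutil/time.h>

/* Hypothetical per-stream state: an abort flag plus an absolute deadline
 * that the event loop (libev in the original setup) can refresh.
 * In production code the flag should be a real atomic. */
struct stream_ctx {
    volatile int abort_requested;  /* set from the event loop to cancel I/O */
    int64_t deadline_us;           /* absolute deadline, in microseconds */
};

/* Returning non-zero asks libavformat to abort the current blocking call;
 * that call then fails with AVERROR_EXIT instead of waiting indefinitely. */
static int interrupt_cb(void *opaque)
{
    struct stream_ctx *ctx = opaque;
    return ctx->abort_requested ||
           av_gettime_relative() > ctx->deadline_us;
}

static int open_rtsp(struct stream_ctx *ctx, const char *url, AVFormatContext **out)
{
    AVFormatContext *fmt = avformat_alloc_context();
    AVDictionary *opts = NULL;
    int ret;

    if (!fmt)
        return AVERROR(ENOMEM);

    /* Register the callback before any network I/O happens. */
    fmt->interrupt_callback.callback = interrupt_cb;
    fmt->interrupt_callback.opaque   = ctx;

    /* Optional socket timeout for the RTSP demuxer; the option is named
     * "stimeout" in older FFmpeg releases and "timeout" in newer ones,
     * value in microseconds. */
    av_dict_set(&opts, "stimeout", "5000000", 0);

    ctx->deadline_us = av_gettime_relative() + 10000000LL;  /* 10 s for the open */
    ret = avformat_open_input(&fmt, url, NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return ret;  /* on failure avformat_open_input frees the context */

    *out = fmt;
    return 0;
}

After the open, the same callback keeps guarding av_read_frame on that context (for protocols that honour it), so the event loop only has to set abort_requested or let the deadline lapse to make a stuck poll() return.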


-
FFmpegFrameGrabber and FFmpegFrameRecorder Audio Issue
20 August 2015, by Sheheryar Chagani
I am compressing an existing camera-recorded video using FFmpegFrameRecorder and FFmpegFrameGrabber.
The issue is that the audio is missing after compression.
Please note that I am using googlecode.javacv along with javacpp and armeabi in the lib folder.
Below is the code which I have used.
public void compressVideo(String filePath) throws Exception {
    FrameGrabber grabber = new FFmpegFrameGrabber(filePath);
    grabber.start();
    String fileoutput = filePath.replace("trimmed", "compressed");
    // recorder.setAudioCodec(grabber.get);
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(fileoutput, 480,
            480, grabber.getAudioChannels());
    recorder.setFrameRate(grabber.getFrameRate());
    recorder.setSampleRate(grabber.getSampleRate());
    recorder.setSampleFormat(grabber.getSampleFormat());
    recorder.setFormat(grabber.getFormat());
    // recorder.setPixelFormat(grabber.getPixelFormat());
    recorder.start();

    Frame frame;
    int count = 0;
    while ((frame = grabber.grabFrame()) != null) {
        if (frame.image != null) {
            publishProgress(count);
            count++;
            IplImage rotateImage = rotate(frame.image, 90);
            IplImage cropImage = resizeImage(rotateImage, 480, 480, true);
            frame.image = cropImage;
            recorder.record(frame);
            if (rotateImage != null)
                opencv_core.cvReleaseImage(rotateImage);
            if (cropImage != null)
                opencv_core.cvReleaseImage(cropImage);
        } else {
            recorder.record(frame);
        }
    }
    recorder.stop();
    grabber.stop();
    recorder.release();
    grabber.release();
}
IplImage resizeImage(IplImage origImg, int newWidth, int newHeight,
        boolean keepAspectRatio) {
    IplImage outImg;
    int origWidth = 0;
    int origHeight = 0;
    if (origImg != null) {
        origWidth = origImg.width();
        origHeight = origImg.height();
    }
    if (newWidth <= 0 || newHeight <= 0 || origImg == null
            || origWidth <= 0 || origHeight <= 0) {
        // cerr << "ERROR: Bad desired image size of " << newWidth
        // << "x" << newHeight << " in resizeImage().\n";
        return null;
    }
    if (keepAspectRatio) {
        // Resize the image without changing its aspect ratio,
        // by cropping off the edges and enlarging the middle section.
        CvRect r;
        // input aspect ratio
        float origAspect = (origWidth / (float) origHeight);
        // output aspect ratio
        float newAspect = (newWidth / (float) newHeight);
        // crop width to be origHeight * newAspect
        if (origAspect > newAspect) {
            int tw = (origHeight * newWidth) / newHeight;
            // System.out.println((origWidth - tw) / 2+" "+)
            r = opencv_core.cvRect((origWidth - tw) / 2, 0, tw, origHeight);
        } else { // crop height to be origWidth / newAspect
            int th = (origWidth * newHeight) / newWidth;
            r = opencv_core.cvRect(0, (origHeight - th) / 2, origWidth, th);
        }
        IplImage croppedImg = cropImage(origImg, r);
        // Call this function again, with the new aspect ratio image.
        // Will do a scaled image resize with the correct aspect ratio.
        outImg = resizeImage(croppedImg, newWidth, newHeight, false);
        opencv_core.cvReleaseImage(croppedImg);
    } else {
        // Scale the image to the new dimensions,
        // even if the aspect ratio will be changed.
        outImg = opencv_core.cvCreateImage(
                opencv_core.cvSize(newWidth, newHeight), origImg.depth(),
                origImg.nChannels());
        if (newWidth > origImg.width() && newHeight > origImg.height()) {
            // Make the image larger
            opencv_core.cvResetImageROI((IplImage) origImg);
            // CV_INTER_LINEAR: good at enlarging.
            // CV_INTER_CUBIC: good at enlarging.
            cvResize(origImg, outImg, CV_INTER_LINEAR);
        } else {
            // Make the image smaller
            opencv_core.cvResetImageROI((IplImage) origImg);
            // CV_INTER_AREA: good at shrinking (decimation) only.
            cvResize(origImg, outImg, CV_INTER_AREA);
        }
    }
    return outImg;
}
// Returns a new image that is a cropped version (rectangular cut-out)
// of the original image.
IplImage cropImage(IplImage img, CvRect region) {
    IplImage imageCropped;
    opencv_core.CvSize size = new CvSize();
    if (img.width() <= 0 || img.height() <= 0 || region.width() <= 0
            || region.height() <= 0) {
        // cerr << "ERROR in cropImage(): invalid dimensions." << endl;
        return null;
    }
    if (img.depth() != opencv_core.IPL_DEPTH_8U) {
        // cerr << "ERROR in cropImage(): image depth is not 8." << endl;
        return null;
    }
    // Set the desired region of interest.
    opencv_core.cvSetImageROI((IplImage) img, region);
    // Copy region of interest into a new iplImage and return it.
    size.width(region.width());
    size.height(region.height());
    imageCropped = opencv_core.cvCreateImage(size,
            opencv_core.IPL_DEPTH_8U, img.nChannels());
    opencv_core.cvCopy(img, imageCropped); // Copy just the region.
    return imageCropped;
}
public IplImage rotate(IplImage image, double angle) {
    IplImage copy = opencv_core.cvCloneImage(image);
    IplImage rotatedImage = opencv_core.cvCreateImage(
            opencv_core.cvGetSize(copy), copy.depth(), copy.nChannels());
    CvMat mapMatrix = opencv_core.cvCreateMat(2, 3, opencv_core.CV_32FC1);
    // Define Mid Point
    CvPoint2D32f centerPoint = new CvPoint2D32f();
    centerPoint.x(copy.width() / 2);
    centerPoint.y(copy.height() / 2);
    // Get Rotational Matrix
    opencv_imgproc.cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
    // Rotate the Image
    opencv_imgproc.cvWarpAffine(copy, rotatedImage, mapMatrix,
            opencv_imgproc.CV_INTER_CUBIC
                    + opencv_imgproc.CV_WARP_FILL_OUTLIERS,
            opencv_core.cvScalarAll(170));
    opencv_core.cvReleaseImage(copy);
    opencv_core.cvReleaseMat(mapMatrix);
    return rotatedImage;
}

I am rotating the video frame and then resizing the frame image.
The code was working fine three days ago, but now it has suddenly started misbehaving.
-
How to Convert 16:9 Video to 9:16 Ratio While Ensuring Speaker Presence in Frame?
28 April 2024, by shreesha
I have tried many times to figure out the problem with detecting the face, and the result is also not as smooth as other tools out there.


So basically I am using Python and YOLO in this project, but I want the person who is talking to be the ROI (region of interest).


Here is the code:


from ultralytics import YOLO
from ultralytics.engine.results import Results
from moviepy.editor import VideoFileClip, concatenate_videoclips
from moviepy.video.fx.crop import crop

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Load the input video
clip = VideoFileClip("short_test.mp4")

tacked_clips = []

for frame_no, frame in enumerate(clip.iter_frames()):
    # Process the frame
    results: list[Results] = model(frame)

    # Get the bounding box of the main object
    if results[0].boxes:
        objects = results[0].boxes
        main_obj = max(
            objects, key=lambda x: x.conf
        )  # Assuming the first detected object is the main one

        x1, y1, x2, y2 = [int(val) for val in main_obj.xyxy[0].tolist()]

        # Calculate the crop region based on the object's position and the target aspect ratio
        w, h = clip.size
        new_w = int(h * 9 / 16)
        new_h = h

        x_center = x2 - x1
        y_center = y2 - y1

        # Adjust x_center and y_center if they would cause the crop region to exceed the bounds
        if x_center + (new_w / 2) > w:
            x_center -= x_center + (new_w / 2) - w
        elif x_center - (new_w / 2) < 0:
            x_center += abs(x_center - (new_w / 2))

        if y_center + (new_h / 2) > h:
            y_center -= y_center + (new_h / 2) - h
        elif y_center - (new_h / 2) < 0:
            y_center += abs(y_center - (new_h / 2))

        # Create a subclip for the current frame
        start_time = frame_no / clip.fps
        end_time = (frame_no + 1) / clip.fps
        subclip = clip.subclip(start_time, end_time)

        # Apply cropping using MoviePy
        cropped_clip = crop(
            subclip, x_center=x_center, y_center=y_center, width=new_w, height=new_h
        )

        tacked_clips.append(cropped_clip)

reframed_clip = concatenate_videoclips(tacked_clips, method="compose")
reframed_clip.write_videofile("output_video.mp4")



So basically I want to fix the face detection together with ROI detection, so that it detects the face, keeps that face and the body in the frame, and makes sure that the person who is speaking is brought into the frame.