
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (66)
-
General document management
13 May 2011
MédiaSPIP never modifies the original document that is uploaded. For each uploaded document it performs two successive operations: it creates an additional version that can easily be viewed online, while keeping the original downloadable in case the document cannot be read in a web browser; and it extracts the original document's metadata to describe the file textually.
The tables below explain what MédiaSPIP can do (...)
-
Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add your own using the form at the bottom of the page.
-
Submit bugs and patches
13 April 2011
Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system, and please help us fix it by providing the following information: the browser you are using, including the exact version; as precise an explanation of the problem as possible; if possible, the steps that led to the problem; and a link to the site/page in question.
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...)
On other sites (7647)
-
H264 streamed video stutter and freeze with MediaCodec, Android 4.1.2
5 March 2015, by Wajih
I have been trying my best to remove the stutter from an Android RTSP client.
Here is my setup:
- An FFmpeg server streams live video on Windows 7; the video is 1200x900 and in H264 format.
- I receive the video packets on the Android (4.1.2) client in JNI, which pushes each packet to Java. The device is a Samsung Tab4.
- Packets are decoded using MediaCodec: one call from JNI pushes the packets into MediaCodec, while another thread in Java dequeues the decoded data and displays it on a SurfaceView (actually a GLSurfaceView).
Despite my efforts of using a queue to buffer the packets and changing the dequeue wait times to 0, -1, and 1000000, I am unable to get a clean streamed video. I understand there is some packet loss (1% to 10%), but I am getting a broken video with stutter (some call it jitter): green patches, pink screens, gray slices. You name it, it is there. The problem seems to be exaggerated when there is fast movement in the video.
At the moment I am not sure where the problem lies. I tried a Windows version of the client (with FFmpeg decoding) and it works smoothly despite the packet loss. What am I doing wrong? Any guidance is appreciated.
Below is the client-end code for Android, followed by the server-end FFmpeg settings I read from a config file.

// Function called from JNI
public int decodeVideo(byte[] data, int size, long presentationTimeUs, boolean rtpMarker, int flag)
{
    if (vdecoder == null)
        return -1;
    if (currVInbufIdx == -1) {
        vdecoderInbufIdx = vdecoder.dequeueInputBuffer(1000000); // 1s
        if (vdecoderInbufIdx < 0) {
            Log.d("log", "decodeVideo@1: frame dropped");
            vdecoderRet = -1;
            return vdecoderRet;
        }
        currVInbufIdx = vdecoderInbufIdx;
        currVPts = presentationTimeUs;
        currVFlag = flag;
        inputVBuffers[currVInbufIdx].clear();
    }
    vdecoderPos = inputVBuffers[currVInbufIdx].position();
    vdecoderRemaining = inputVBuffers[currVInbufIdx].remaining();
    if (flag == currVFlag && vdecoderRemaining >= size && currVPts == presentationTimeUs
            && rtpMarker == false
            /*&& (pos < vbufferLevel || vbufferLevel <= 0)*/)
    {
        /* Queue without decoding */
        inputVBuffers[currVInbufIdx].put(data, 0, size);
    }
    else
    {
        if (flag == currVFlag && vdecoderRemaining >= size && currVPts == presentationTimeUs
                && rtpMarker)
        {
            inputVBuffers[currVInbufIdx].put(data, 0, size);
            queued = true;
        }
        Log.d("log", "decodeVideo: submit,"
                + " pts=" + Long.toString(currVPts)
                + " position=" + inputVBuffers[currVInbufIdx].position()
                + " capacity=" + inputVBuffers[currVInbufIdx].capacity()
                + " VBIndex=" + currVInbufIdx
        );
        vdecoder.queueInputBuffer(currVInbufIdx, 0, inputVBuffers[currVInbufIdx].position(), currVPts, currVFlag);

        vdecoderInbufIdx = vdecoder.dequeueInputBuffer(1000000); // 1s
        if (vdecoderInbufIdx >= 0)
        {
            currVInbufIdx = vdecoderInbufIdx;
            currVPts = presentationTimeUs;
            currVFlag = flag;
            inputVBuffers[currVInbufIdx].clear();
            //if (queued == false)
            {
                inputVBuffers[vdecoderInbufIdx].put(data, 0, size);
            }
        }
        else
        {
            currVInbufIdx = -1;
            currVPts = -1;
            vdecoderRet = -1;
            Log.d("log", "decodeVideo@2: frame dropped");
        }
    }
    return vdecoderRet;
}

And here is the thread that drives rendering:
// Function at android. Called by a separate thread.
private void videoRendererThreadProc() {
    if (bufinfo == null)
        bufinfo = new MediaCodec.BufferInfo();
    videoRendered = false;
    Log.d("log", "videoRenderer started.");
    while (!Thread.interrupted() && !quitVideoRenderer)
    {
        Log.d("log", "videoRendererThreadProc");
        outbufIdx = vdecoder.dequeueOutputBuffer(bufinfo, 1000000); // 500000
        switch (outbufIdx)
        {
        case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
            Log.d("log", "decodeVideo: output buffers changed.");
            // outputBuffers = vdecoder.getOutputBuffers();
            break;
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.d("log", "decodeVideo: format changed - " + vdecoder.getOutputFormat());
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            // Log.d("log", "decodeVideo: try again later.");
            break;
        default:
            // decoded or rendered
            videoRendered = true;
            vdecoder.releaseOutputBuffer(outbufIdx, true);
            // Log.d("log", "decodeVideo: Rendering...!!!.");
        }
    }
    // flush decoder
    //vdecoder.queueInputBuffer(0, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
    outbufIdx = vdecoder.dequeueOutputBuffer(bufinfo, 1000000); // 10000
    if (outbufIdx >= 0)
    {
        vdecoder.releaseOutputBuffer(outbufIdx, true);
    }
    bufinfo = null;
    videoRendered = false;

    Log.d("log", "videoRenderer terminated.");
}

And the FFmpeg settings on the server are as follows:
[slices] = 4 # --slices
[threads] = 4 # --threads
[profile] = high # --profile main|baseline
[preset] = faster # --preset faster|ultrafast
[tune]    = zerolatency # --tune
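One frequent cause of the green patches, gray slices, and stutter described in this question is feeding MediaCodec partial NAL units: the decoder generally expects each queueInputBuffer call to carry one complete access unit. The packet-accumulation idea behind decodeVideo can be isolated into a small, Android-free sketch; AccessUnitAssembler is a hypothetical name used only for illustration, and in the real client the assembled bytes would be copied into the codec's input ByteBuffer.

```java
import java.io.ByteArrayOutputStream;

// Hypothetical helper: collects RTP payloads until the marker bit signals the
// end of an access unit, so the decoder receives one complete unit per call.
class AccessUnitAssembler {
    private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

    /** Append a payload; returns the finished access unit when marker is set, else null. */
    byte[] push(byte[] payload, boolean marker) {
        pending.write(payload, 0, payload.length);
        if (!marker) return null;      // more packets belong to this unit
        byte[] unit = pending.toByteArray();
        pending.reset();               // start collecting the next unit
        return unit;
    }

    public static void main(String[] args) {
        AccessUnitAssembler asm = new AccessUnitAssembler();
        asm.push(new byte[]{0, 0, 0, 1}, false);        // start-code fragment
        byte[] unit = asm.push(new byte[]{0x65}, true); // marker ends the unit
        System.out.println("assembled " + unit.length + " bytes");
    }
}
```

With packet loss in the 1-10% range, submitting the surviving fragments of an incomplete unit is what typically produces corrupted slices, so units with missing packets are usually better discarded whole.
-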
How do you run an ffmpeg command in Java on macOS using a ProcessBuilder?
5 August 2020, by nottAbott
I am writing a program in Java that uses ffmpeg to "snip" a video into several pieces and then stitch them back together. I have everything working relatively smoothly on Windows, but I cannot get ffmpeg to work on Mac, or on Linux for that matter; I'm focusing on Mac right now. I thought it might be a permissions problem, but when I run it with sudo I get an error that says (after typing in the password):


sudo: ffmpeg: command not found



when I run it without sudo I get :


java.io.IOException: Cannot run program "ffmpeg": error=2, No such file or directory



I think it might be because the ffmpeg package on the Mac machine was downloaded with Homebrew, and ffmpeg is stored in /usr/local/Cellar/ffmpeg instead of the default folder, wherever that may be. That may not be the problem, though, because I deleted ffmpeg and re-downloaded it with Homebrew; it may have been in its default folder in my first tests as well. It would be great to figure this out. Most of my family uses Mac (not me) and I really want to share my work with them; that is why I chose to code this in Java. Oh, and I did try using the path to the binary in the command. Here's the code:


//snips out all the clips from the main video
public void snip() throws IOException, InterruptedException {

    for (int i = 0; i < snippets.size(); i++) {
        //ffmpeg -i 20sec.mp4 -ss 0:0:1 -to 0:0:5 -c copy foobar.mp4
        String newFile = "foobar" + String.valueOf(i) + ".mp4";

        //THIS WORKS
        if (OS.isWindows()) {
            ProcessBuilder processBuilder = new ProcessBuilder("ffmpeg", "-i", videoName, "-ss",
                    snippets.get(i).getStartTime(), "-to", snippets.get(i).getEndTime(), newFile);

            Process process = processBuilder.inheritIO().start();
            process.waitFor();
            System.out.println("Win Snip " + i + "\n");
        }

        else if (OS.isMac()) {
            //FFMPEG LOCATION: /usr/local/Cellar/ffmpeg
            //THE ERROR: sudo: ffmpeg: command not found
            //ERROR W/OUT SUDO: java.io.IOException: Cannot run program "ffmpeg": error=2, No such file or directory
            ProcessBuilder processBuilder = new ProcessBuilder("sudo", "-S", "ffmpeg", "-f", videoName, "-ss",
                    snippets.get(i).getStartTime(), "-to", snippets.get(i).getEndTime(), newFile);

            Process process = processBuilder.inheritIO().start();
            process.waitFor();
            System.out.println("Mac Snip " + i + "\n");
        }

        else if (OS.isUnix()) {
            System.out.println("Your operating system is not supported");
            //TODO
            //need to figure out if deb/red hat/whatever are different
        }

        else if (OS.isSolaris()) {
            System.out.println("Your operating system is not supported yet");
            //TODO probably won't do
        }

        else {
            System.out.println("Your operating system is not supported");
        }
        //add to the list of files to be concat later
        filesToStitch.add(newFile);
        filesToDelete.add(newFile);

    }
    //System.out.println(stitchFiles);
}
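The `error=2` (No such file or directory) means the JVM could not find an `ffmpeg` executable on the PATH it inherited, and `sudo` typically resets the environment to its own restricted path, so both failures are consistent with Homebrew installing ffmpeg outside the default lookup locations. A minimal sketch of one workaround is to resolve an absolute path before building the command; the candidate paths below are assumptions about a typical Homebrew install (the Cellar directory is normally symlinked into /usr/local/bin or /opt/homebrew/bin), not paths taken from the question.

```java
import java.io.File;

class FfmpegLocator {
    // Assumed install locations; adjust for the target machine.
    static final String[] CANDIDATES = {
        "/usr/local/bin/ffmpeg",    // Homebrew on Intel Macs
        "/opt/homebrew/bin/ffmpeg", // Homebrew on Apple Silicon
        "/usr/bin/ffmpeg"           // common Linux package location
    };

    /** Return an absolute path to ffmpeg if one exists, else fall back to a PATH lookup. */
    static String resolve() {
        for (String candidate : CANDIDATES) {
            if (new File(candidate).canExecute()) {
                return candidate;
            }
        }
        return "ffmpeg";
    }

    public static void main(String[] args) {
        // Build the command with the resolved path; sudo should not be needed for ffmpeg itself.
        String ffmpeg = resolve();
        ProcessBuilder pb = new ProcessBuilder(ffmpeg, "-version");
        System.out.println("would run: " + String.join(" ", pb.command()));
    }
}
```

Unrelated to the lookup failure, note that the Mac branch in the question passes `-f` before the input file where the working Windows branch passes `-i`.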



-
Python code mutes the whole video instead of sliding in a song. What shall I do?
16 July 2023, by Armed Nun
I am trying to split a song into 4 parts and slide the parts into random parts of a video. The problem with my code is that the final output video is muted. I want to play parts of the song at random intervals, and while the song is playing the original video should be muted. Thanks to everyone who helps.


import random
from moviepy.editor import *

def split_audio_into_parts(mp3_path, num_parts):
    audio = AudioFileClip(mp3_path)
    duration = audio.duration
    part_duration = duration / num_parts

    parts = []
    for i in range(num_parts):
        start_time = i * part_duration
        end_time = start_time + part_duration if i < num_parts - 1 else duration
        part = audio.subclip(start_time, end_time)
        parts.append(part)

    return parts

def split_video_into_segments(video_path, num_segments):
    video = VideoFileClip(video_path)
    duration = video.duration
    segment_duration = duration / num_segments

    segments = []
    for i in range(num_segments):
        start_time = i * segment_duration
        end_time = start_time + segment_duration if i < num_segments - 1 else duration
        segment = video.subclip(start_time, end_time)
        segments.append(segment)

    return segments

def insert_audio_into_segments(segments, audio_parts):
    modified_segments = []
    for segment, audio_part in zip(segments, audio_parts):
        audio_part = audio_part.volumex(0)  # Mute the audio part
        modified_segment = segment.set_audio(audio_part)
        modified_segments.append(modified_segment)

    return modified_segments

def combine_segments(segments):
    final_video = concatenate_videoclips(segments)
    return final_video

# Example usage
mp3_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/DENKATA - Podvodnica Demo (1).mp3"
video_file_path = "C:/Users/Kris/PycharmProjects/videoeditingscript124234/family.guy.s21e13.1080p.web.h264-cakes[eztv.re].mkv"
num_parts = 4

audio_parts = split_audio_into_parts(mp3_file_path, num_parts)
segments = split_video_into_segments(video_file_path, num_parts)
segments = insert_audio_into_segments(segments, audio_parts)
final_video = combine_segments(segments)
final_video.write_videofile("output.mp4", codec="libx264", audio_codec="aac")



I tried entering most of this into ChatGPT and asking questions on forums, but without success, so let's hope I can find my solution here.
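The two split functions in the question repeat the same boundary arithmetic; factoring it into a pure helper (a hypothetical `part_bounds`, not part of the original script) makes the partitioning easy to check without loading any media:

```python
def part_bounds(duration, num_parts):
    """Partition [0, duration] into num_parts contiguous (start, end) spans.

    Mirrors the arithmetic of split_audio_into_parts and
    split_video_into_segments: the last span ends exactly at `duration`,
    so floating-point remainders never leave a gap.
    """
    part = duration / num_parts
    bounds = []
    for i in range(num_parts):
        start = i * part
        end = start + part if i < num_parts - 1 else duration
        bounds.append((start, end))
    return bounds
```

As for the muted output: `volumex(0)` is applied to each song part right before `set_audio`, so the replacement soundtrack itself is silenced. Since `set_audio` already discards the segment's original audio, removing the `volumex(0)` call is likely the intended behavior.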