
Media (91)
-
Head down (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Echoplex (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Discipline (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
Letting you (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
1 000 000 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
-
999 999 (wav version)
26 September 2011
Updated: April 2013
Language: English
Type: Audio
Other articles (105)
-
Encoding and processing into web-friendly formats
13 April 2011
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in OGV and WebM (supported by HTML5) and in MP4 (supported by Flash).
Audio files are encoded in Ogg (supported by HTML5) and in MP3 (supported by Flash).
Where possible, text is analysed to extract the data needed for indexing by search engines, and is then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
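Purely as an illustration, conversions of this kind can be reproduced with ffmpeg along the following lines. The commands, codec options, and the source filename are assumptions for the example, not MediaSPIP's actual pipeline or settings.
# assumed example commands, not MediaSPIP's real encoding settings
ffmpeg -i source.mov -c:v libtheora -q:v 6 -c:a libvorbis output.ogv
ffmpeg -i source.mov -c:v libvpx -b:v 1M -c:a libvorbis output.webm
ffmpeg -i source.mov -c:v libx264 -c:a aac output.mp4
-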
Adding user-specific information and other author-related behaviour changes
12 April 2011
The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the plugins champs extras 2 and Interface pour champs extras.
-
Farm management
2 March 2010
The farm as a whole is managed by "super admins".
Certain settings can be adjusted to meet the needs of the different channels.
To begin with, it uses the "Gestion de mutualisation" plugin.
On other websites (12572)
-
Concatenate mkv files keeping original timestamps with gaps
13 August 2020, by Filip
16 days ago I had this problem: Concatenating mka files but keeping timestamp, which I fixed by using amix and delaying each input by its start_pts from ffprobe.

Today I have a similar challenge, but with video.


So I have a bunch of mkv videos. Each time a person joins a chat, an mkv is recorded, but if the person refreshes the page a new mkv is created with start_pts and start_time set to the actual join time. Likewise, if the meeting has already started and a person joins after a minute, the start_time is set to 1 minute. I need to merge all those mkv files and pad them with a blank screen wherever there is no feed.


As in the example above, if a person joins after a minute, the first minute is a blank screen. Similarly, if the participant leaves and re-joins after 10 seconds, those 10 seconds are blank again.


Any ideas on how to do that with ffmpeg?


Concrete example of files:


0PA84c5c3f412769b311d44b159941b2d22.mkv - start_pts: 742 start_time: 0.742000
2PA73d94e8cb0f41c3002fadd6c04b4a88f.mkv - start_pts: 30761 start_time: 30.761000
3PAcd35e470325618fa8a3fb8bb5a41403e.mkv - start_pts: 50940 start_time: 50.940000
4PAddccde7b8847ecc43d5e8643b7903dba.mkv - start_pts: 69243 start_time: 69.243000



The end result would be a single file of length 69.243000; the first 0.742 seconds are blank, and the gaps in between should be blank as well.


So far I've tried:


ffmpeg -i 0PA84c5c3f412769b311d44b159941b2d22.mkv -i 2PA73d94e8cb0f41c3002fadd6c04b4a88f.mkv -i 3PAcd35e470325618fa8a3fb8bb5a41403e.mkv -i 4PAddccde7b8847ecc43d5e8643b7903dba.mkv -filter_complex "[0:v] [1:v] [2:v] [3:v] concat=n=4:v=1 [v]" -map "[v]" test.mkv


This works, but without the gaps I mentioned.
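One direction worth sketching (my assumption, not from the post): pad the front of each segment with black video via the tpad filter before concatenating. The 0.742 value comes from the first file's start_time above; GAP1 through GAP3 are placeholders that would be computed as each file's start_time minus the previous file's end time (start_time plus duration, both from ffprobe).

# sketch only: prepend black frames sized to each gap, then concat as before
ffmpeg -i 0PA84c5c3f412769b311d44b159941b2d22.mkv \
 -i 2PA73d94e8cb0f41c3002fadd6c04b4a88f.mkv \
 -i 3PAcd35e470325618fa8a3fb8bb5a41403e.mkv \
 -i 4PAddccde7b8847ecc43d5e8643b7903dba.mkv \
 -filter_complex \
"[0:v]tpad=start_duration=0.742:color=black[v0]; \
 [1:v]tpad=start_duration=GAP1:color=black[v1]; \
 [2:v]tpad=start_duration=GAP2:color=black[v2]; \
 [3:v]tpad=start_duration=GAP3:color=black[v3]; \
 [v0][v1][v2][v3]concat=n=4:v=1[v]" -map "[v]" test.mkv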


-
How to stop a sound when certain other sound is inserted in the mix in ffmpeg?
3 April 2022, by Antonio Oliveira
I'm using an ffmpeg command that takes a set of sounds and mixes them into a single file, separating them by certain time intervals.


Below is the command as it stands today.


ffmpeg -i close_hh.wav -i crash_l.wav -i crash_r.wav -i floor.wav \
 -i kick_l.wav -i kick_r.wav -i open_hh.wav -i ride.wav \
 -i snare.wav -i splash.wav -i tom_1.wav -i tom_2.wav \
 -i tom_3.wav -filter_complex " [6]adelay=0|0[note_0]; [0]adelay=360|360[note_1]; [6]adelay=1260|1260[note_2]; [0]adelay=1537|1537[note_3]; [6]adelay=2494|2494[note_4]; [5]adelay=2767|2767[note_5]; [0]adelay=2969|2969[note_6]; [6]adelay=3673|3673[note_7]; [5]adelay=3924|3924[note_8]; [0]adelay=4132|4132[note_9]; [0][note_0][note_1][note_2][note_3][note_4][note_5][note_6][note_7][note_8][note_9]amix=inputs=11:normalize=0" record.wav



This is the resulting audio that this command generates:


ffmpeg record.wav: https://drive.google.com/file/d/1LFV4ImLKLnRCqZRhZ7OqZy4Ecq5fwT3j/view?usp=sharing


The purpose is to generate a drum recording, so I would like to simulate the dynamics of the hi-hat sounds: when the closed hi-hat is played, the open hi-hat should stop playing immediately if it is still sounding. No other sound behaves this way.


One point that makes this a little more challenging is that other sounds can also be played between the open and closed hi-hat strikes, and the interruption behavior should still work normally in that case.


Below is a recording demonstrating the expected result. (My app already reproduces the sound result I need internally, so I just made a simple recording with the microphone to illustrate)


mic record.wav: https://drive.google.com/file/d/19x19Fd_URQVo-MMCmGEHIC1SjaQbpWrh/view?usp=sharing


Notice that in the first audio (ffmpeg record.wav) the first sound (open hi-hat) continues playing after the second is played.
In the second audio (mic record.wav) the first sound stops immediately after the second sound is played.


What should the ffmpeg command look like to get the expected result?
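One possible sketch (my assumption, not part of the question): since every note's onset time is known when the command is built, each open hi-hat stream can be truncated with atrim so it ends exactly at the next closed hi-hat onset, before being delayed into place. Using just the first two notes above (open hi-hat at 0 ms, closed hi-hat at 360 ms):

# sketch: choke the open hi-hat at the next closed hi-hat onset (0.360 s)
ffmpeg -i open_hh.wav -i close_hh.wav -filter_complex \
"[0]atrim=end=0.360,adelay=0|0[note_0]; \
 [1]adelay=360|360[note_1]; \
 [note_0][note_1]amix=inputs=2:normalize=0" record.wav

The same atrim treatment would be repeated for every open hi-hat note that is followed by a closed hi-hat hit; a short afade after the atrim could avoid a click at the cut point.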


-
React Native Expo File System: open failed: ENOENT (No such file or directory)
9 February 2023, by coloraday
I'm getting this error in a bare React Native project:


Possible Unhandled Promise Rejection (id: 123):
Error: /data/user/0/com.filsufius.VisionishAItest/files/image-new-♥d.jpg: open failed: ENOENT (No such file or directory)



The same code was saving to the File System with no problem yesterday, but today, as you can see, I am getting an ENOENT error, plus these odd heart shapes ♥d in the path. Any pointers as to what might be causing this, please? I use npx expo run:android to build the app locally and expo start --dev-client to run it on a physical Android device connected through USB.


import React, { useState, useEffect } from "react";
import { Image, View, Text, StyleSheet } from "react-native";
import * as FileSystem from "expo-file-system";
import RNFFmpeg from "react-native-ffmpeg";
import * as tf from "@tensorflow/tfjs";
import * as cocossd from "@tensorflow-models/coco-ssd";
import { decodeJpeg, bundleResourceIO } from "@tensorflow/tfjs-react-native";

const Record = () => {
 const [frames, setFrames] = useState([]);
 const [currentFrame, setCurrentFrame] = useState(0);
 const [model, setModel] = useState(null);
 const [detections, setDetections] = useState([]);

 useEffect(() => {
 const fileName = "image-new-%03d.jpg";
 const outputPath = FileSystem.documentDirectory + fileName;
 RNFFmpeg.execute(
 "-y -i https://res.cloudinary.com/dannykeane/video/upload/sp_full_hd/q_80:qmax_90,ac_none/v1/dk-memoji-dark.m3u8 -vf fps=25 -f mjpeg " +
 outputPath
 )
 .then((result) => {
 console.log("Extraction succeeded:", result);
 FileSystem.readDirectoryAsync(FileSystem.documentDirectory).then(
 (files) => {
 setFrames(
 files
 .filter((file) => file.endsWith(".jpg"))
 .sort((a, b) => {
 const aNum = parseInt(a.split("-")[2].split(".")[0]);
 const bNum = parseInt(b.split("-")[2].split(".")[0]);
 return aNum - bNum;
 })
 );
 }
 );
 })
 .catch((error) => {
 console.error("Extraction failed:", error);
 });
 }, []);

 useEffect(() => {
 tf.ready().then(() => cocossd.load().then((model) => setModel(model)));
 }, []);
 useEffect(() => {
 if (frames.length && model) {
 const intervalId = setInterval(async () => {
 setCurrentFrame((currentFrame) =>
 currentFrame === frames.length - 1 ? 0 : currentFrame + 1
 );
 const path = FileSystem.documentDirectory + frames[currentFrame];
 const imageAssetPath = await FileSystem.readAsStringAsync(path, {
 encoding: FileSystem.EncodingType.Base64,
 });
 const imgBuffer = tf.util.encodeString(imageAssetPath, "base64").buffer;
 const imageData = new Uint8Array(imgBuffer);
 const imageTensor = decodeJpeg(imageData, 3);
 console.log("after decodeJpeg.");
 const detections = await model.detect(imageTensor);
 console.log(detections);
 setDetections(detections);
 }, 100);
 return () => clearInterval(intervalId);
 }
 }, [frames, model]);

 
 return (
 <View style={styles.container}>
 <View style={styles.predictions}>
 {detections.map((p, i) => (
 <Text key={i} style={styles.text}>
 {p.class}: {(p.score * 100).toFixed(2)}%
 </Text>
 ))}
 </View>
 </View>
 );
};

const styles = StyleSheet.create({
 container: {
 flex: 1,
 alignItems: "center",
 justifyContent: "center",
 },
 image: {
 width: 300,
 height: 300,
 resizeMode: "contain",
 },
 predictions: {
 width: 300,
 height: 100,
 marginTop: 20,
 },
 text: {
 fontSize: 14,
 textAlign: "center",
 },
});

export default Record;