
Media (2)
-
Valkaama DVD Label
4 October 2011
Updated: February 2013
Language: English
Type: Image
-
Podcasting Legal guide
16 May 2011
Updated: May 2011
Language: English
Type: Text
Other articles (85)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Adding user-specific information and other changes to author-related behaviour
12 April 2011. The simplest way to add information to authors is to install the Inscription3 plugin. It also makes it possible to modify certain user-related behaviours (refer to its documentation for more information).
It is also possible to add fields to authors by installing the champs extras 2 and Interface pour champs extras plugins.
-
Possibility of farm deployment
12 April 2011. MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
This makes it possible, for example: to share setup costs between several projects or individuals; to quickly deploy a multitude of unique sites; and to avoid having to put all creations into a digital catch-all, as is the case with the large general-public platforms scattered across the (...)
On other sites (18436)
-
FFmpegKit in Flutter fails with configuration output on Android
15 December 2024, by ashvin jadav
I am using ffmpeg_kit_flutter in my Flutter app to add a text overlay to a video. However, when I try to execute the FFmpeg command, I get output similar to this:

configuration:
--cross-prefix=aarch64-linux-android- --sysroot=/Users/sue/Library/Android/sdk/ndk/22.1.7171670/toolchains/llvm/prebuilt/darwin-x86_64/sysroot --prefix=/Users/sue/Projects/arthenica/ffmpeg-kit/prebuilt/android-arm64/ffmpeg --pkg-config=/opt/homebrew/bin/pkg-config --enable-version3 --arch=aarch64 --cpu=armv8-a --target-os=android --enable-neon --enable-asm --enable-inline-asm --ar=aarch64-linux-android-ar --cc=aarch64-linux-android24-clang --cxx=aarch64-linux-android24-clang++ --ranlib=aarch64-linux-android-ranlib --strip=aarch64-linux-android-strip --nm=aarch64-linux-android-nm --extra-libs='-L/Users/sue/Projects/arthenica/ffmpeg-kit/prebuilt/android-arm64/cpu-features/lib -lndk_compat' --disable-autodetect --enable-cross-compile --enable-pic --enable...


Here is my code:


final command = '-i $_videoPath -vf "drawtext=text=\'Live Score: 10 - 5\':fontcolor=white:fontsize=24:x=10:y=10" $_processedVideoPath';

await FFmpegKit.execute(command).then((session) async {
 final returnCode = await session.getReturnCode();
 final output = await session.getOutput();
 final errorOutput = await session.getFailStackTrace();

 print('Return Code: $returnCode');
 print('Output: $output');
 print('Error Output: $errorOutput');
});



Dependencies:

ffmpeg_kit_flutter: ^6.0.3


-
How to get video frame for a specific time from mp4
11 December 2015, by man-r
I have an MP4 video byte array and I need to generate a thumbnail for it using its middle frame (e.g. if the video length is 10 seconds, then I need to get the picture from the 5th second).
I managed to parse through the file and extract its boxes (atoms). I have also managed to get the video length from the mvhd box. I also managed to extract:
1. the time-to-sample table from the stts box,
2. the sample-to-chunk table from the stsc box,
3. the chunk-offset table from the stco box,
4. the sample-size table from the stsz box,
5. the sync-sample table from the stss box.
I know that all the actual media data is in the mdat box and that I need to correlate the above tables to find the exact frame offset in the file, but my question is how? The tables' data seems to be compressed (especially the time-to-sample table), but I don't know how to decompress it.
Any help is appreciated.
Below are code samples.
Code to convert bytes to hex:
// hex digit lookup table used by bytesToHex (assumed definition; not shown in the original snippet)
private final static char[] hexArray = "0123456789ABCDEF".toCharArray();

public static char[] bytesToHex(byte[] bytes) {
    char[] hexChars = new char[bytes.length * 2];
    for (int j = 0; j < bytes.length; j++) {
        int v = bytes[j] & 0xFF;
        hexChars[j * 2] = hexArray[v >>> 4];        // high nibble
        hexChars[j * 2 + 1] = hexArray[v & 0x0F];   // low nibble
    }
    return hexChars;
}

Code for getting the box offset:
final static String MOOV = "6D6F6F76";
final static String MOOV_MVHD = "6D766864";
final static String MOOV_TRAK = "7472616B";
final static String MOOV_TRAK_MDIA = "6D646961";
final static String MOOV_TRAK_MDIA_MINF = "6D696E66";
final static String MOOV_TRAK_MDIA_MINF_STBL = "7374626C";
final static String MOOV_TRAK_MDIA_MINF_STBL_STSD = "73747364";
final static String MOOV_TRAK_MDIA_MINF_STBL_STTS = "73747473";
final static String MOOV_TRAK_MDIA_MINF_STBL_STSS = "73747373";
final static String MOOV_TRAK_MDIA_MINF_STBL_STSC = "73747363";
final static String MOOV_TRAK_MDIA_MINF_STBL_STCO = "7374636F";
final static String MOOV_TRAK_MDIA_MINF_STBL_STSZ = "7374737A";
static int getBox(char[] s, int offset, String type) {
    int typeOffset = -1;
    for (int i = offset * 2; i < s.length; ) {
        // each box starts with a 4-byte size followed by a 4-byte type
        int size = Integer.parseInt(new String(Arrays.copyOfRange(s, i, i + 8)), 16);
        String boxType = new String(Arrays.copyOfRange(s, i + 8, i + 16));
        if (boxType.equalsIgnoreCase(type)) {
            typeOffset = i / 2;
        }
        if (typeOffset > -1) {
            break;
        }
        i += (size * 2);
    }
    return typeOffset;
}

Code for getting the duration and timescale:
static int[] getDuration(char[] s) {
    int mvhdOffset = getBox(s, 0, MOOV_MVHD);
    int timeScaleStart = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4)*2;
    int timeScaleEnd = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;
    int durationStart = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4)*2;
    int durationEnd = (mvhdOffset*2) + (4 + 4 + 1 + 3 + 4 + 4 + 4 + 4)*2;
    String timeScaleHex = new String(Arrays.copyOfRange(s, timeScaleStart, timeScaleEnd));
    String durationHex = new String(Arrays.copyOfRange(s, durationStart, durationEnd));
    int timeScale = Integer.parseInt(timeScaleHex, 16);
    int duration = Integer.parseInt(durationHex, 16);
    int[] result = {duration, timeScale};
    return result;
}

Code to get the time-to-sample table:
static int[][] getTimeToSampleTable(char[] s, int trakOffset) {
    int offset = getBox(s, trakOffset, MOOV_TRAK_MDIA_MINF_STBL_STTS);
    int sizeStart = offset*2;
    int sizeEnd = offset*2 + (4)*2;
    int typeStart = offset*2 + (4)*2;
    int typeEnd = offset*2 + (4 + 4)*2;
    int noOfEntriesStart = offset*2 + (4 + 4 + 1 + 3)*2;
    int noOfEntriesEnd = offset*2 + (4 + 4 + 1 + 3 + 4)*2;
    String sizeHex = new String(Arrays.copyOfRange(s, sizeStart, sizeEnd));
    String typeHex = new String(Arrays.copyOfRange(s, typeStart, typeEnd));
    String noOfEntriesHex = new String(Arrays.copyOfRange(s, noOfEntriesStart, noOfEntriesEnd));
    int size = Integer.parseInt(sizeHex, 16);
    int noOfEntries = Integer.parseInt(noOfEntriesHex, 16);
    int[][] timeToSampleTable = new int[noOfEntries][2];
    for (int i = 0; i < noOfEntries; i++) {
        // each entry is a 4-byte sample count followed by a 4-byte sample delta
        int entryStart = noOfEntriesEnd + i * (4 + 4) * 2;
        timeToSampleTable[i][0] = Integer.parseInt(new String(Arrays.copyOfRange(s, entryStart, entryStart + 8)), 16);
        timeToSampleTable[i][1] = Integer.parseInt(new String(Arrays.copyOfRange(s, entryStart + 8, entryStart + 16)), 16);
    }
    return timeToSampleTable;
}
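As a hedged illustration of how the correlation might start (this helper is not part of the original code): the stts table is run-length encoded rather than compressed, with each entry holding a sample count and a per-sample duration, so a target time can be mapped to a sample index by accumulating the runs.

// Illustrative helper (an assumption, not taken from the question): walk the run-length
// encoded stts entries and return the zero-based sample index covering targetTime,
// where targetTime is expressed in mvhd timescale units.
static int sampleIndexForTime(int[][] timeToSampleTable, long targetTime) {
    long elapsed = 0;
    int sampleIndex = 0;
    for (int[] entry : timeToSampleTable) {
        int sampleCount = entry[0]; // number of consecutive samples in this run
        int sampleDelta = entry[1]; // duration of each sample, in timescale units
        long runDuration = (long) sampleCount * sampleDelta;
        if (elapsed + runDuration > targetTime) {
            // the target time falls inside this run
            return sampleIndex + (int) ((targetTime - elapsed) / sampleDelta);
        }
        elapsed += runDuration;
        sampleIndex += sampleCount;
    }
    return Math.max(sampleIndex - 1, 0); // clamp to the last sample
}

Called with duration / 2 (both values from getDuration, in the mvhd timescale), this would give the sample nearest the middle of the video; the stss table could then supply the closest preceding key frame, and the stsc/stco/stsz tables its chunk, byte offset and size.
-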
How to add additional metadata to individual frames (DDBs) when creating an AVI file with ffmpeg
6 December 2019, by Totte Karlsson
I'm creating AVI videos from device-dependent bitmaps (DDBs).
The pipeline is quite simple: a GigE camera provides the frames one by one, and each frame, a DDB, is piped to an ffmpeg process that creates the final AVI file using H.264 compression.
These videos are scientific in nature, and we would like to store/embed experimental hardware information, such as the states of a few digital lines, with each frame.
This information needs to be available in the final AVI video. The question is: is this possible?
Looking at https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-bitmap, it does not seem possible to add additional data to the DDBs themselves, but I'm not sure.
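For context only, a minimal sketch of the capture-to-ffmpeg pipe described above, written here in Java rather than the original capture code; the 640x480 BGR frame size, the 30 fps rate, the frame count and the output name are assumptions, and ffmpeg is expected to be on the PATH.

import java.io.OutputStream;

public class FfmpegPipe {
    public static void main(String[] args) throws Exception {
        // Assumed parameters: 640x480 BGR frames at 30 fps; adjust to the camera output.
        ProcessBuilder pb = new ProcessBuilder(
                "ffmpeg", "-y",
                "-f", "rawvideo", "-pix_fmt", "bgr24", "-s", "640x480", "-r", "30",
                "-i", "-",                    // read raw frames from stdin
                "-c:v", "libx264", "out.avi");
        pb.redirectErrorStream(true);
        Process ffmpeg = pb.start();
        try (OutputStream stdin = ffmpeg.getOutputStream()) {
            byte[] frame = new byte[640 * 480 * 3]; // one BGR frame, filled from the camera
            for (int i = 0; i < 300; i++) {         // e.g. 10 seconds of frames
                stdin.write(frame);
            }
        }
        ffmpeg.waitFor();
    }
}

Where the per-frame hardware state could live inside such a pipe is exactly the open question here; the sketch only illustrates the frame-piping part of the pipeline.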