
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (61)
-
Authorizations overridden by plugins
27 April 2010, by Mediaspip core
autoriser_auteur_modifier() so that visitors are able to modify their own information on the authors page
-
Encoding and processing into web-friendly formats
13 April 2011, by
MediaSPIP automatically converts uploaded files to internet-compatible formats.
Video files are encoded in MP4, Ogv and WebM (supported by HTML5) and MP4 (supported by Flash).
Audio files are encoded in MP3 and Ogg (supported by HTML5) and MP3 (supported by Flash).
Where possible, text is analysed in order to retrieve the data needed for search-engine indexing, and then exported as a series of image files.
All uploaded files are stored online in their original format, so you can (...)
-
Final creation of the channel
12 March 2010, by
Once your request has been approved, you can proceed with the actual creation of the channel. Each channel is a fully fledged site in its own right, placed under your responsibility. The platform administrators have no access to it.
Upon approval, you receive an email inviting you to create your channel.
To do so, simply go to its address, in our example "http://votre_sous_domaine.mediaspip.net".
At that point you are asked for a password; you simply need to (...)
On other sites (12441)
-
How to update a byte array in a method, without running it again?
18 February 2016, by AR792
I have a class (an AsyncTask) which does image processing and generates YUV bytes continuously, at roughly 200 ms intervals. I then send these YUV bytes to another method, where they are recorded using the FFmpeg frame recorder:
public void recordYuvData() {
    byte[] yuv = getNV21();
    System.out.println(yuv.length + " returned yuv bytes ");
    if (audioRecord == null || audioRecord.getRecordingState() != AudioRecord.RECORDSTATE_RECORDING) {
        // audio capture has not started yet: remember the start time and bail out
        startTime = System.currentTimeMillis();
        return;
    }
    if (RECORD_LENGTH > 0) {
        // buffered mode: cycle through the preallocated frames and note the timestamp
        int i = imagesIndex++ % images.length;
        yuvimage = images[i];
        timestamps[i] = 1000 * (System.currentTimeMillis() - startTime);
    }
    /* get video data */
    if (yuvimage != null && recording) {
        ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);
        if (RECORD_LENGTH <= 0) {
            // unbuffered mode: push the frame to the recorder immediately
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                }
                recorder.record(yuvimage);
            } catch (FFmpegFrameRecorder.Exception e) {
                e.printStackTrace();
            }
        }
    }
}
This method, recordYuvData(), is initiated on a button click.
-
If I initiate it only once, then only the initial image gets recorded; the rest are not.
-
If I initiate it each time image processing finishes, it records, but the video ends up with a 'weird' fps count, and after some time the application crashes.
What I think is happening: at the end of each image-processing pass a new invocation of recordYuvData() starts without the previous one having ended, so many invocations of recordYuvData() accumulate. [Correct me if I am wrong.]
So, how do I update only the yuv bytes in the method without running it again?
Thanks....!
Edit:
On click:
record.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        recordYuvdata();
        startRecording();
    }
});

getNV21():
byte[] getNV21(Bitmap bitmap) {
    int inputWidth = 1024;
    int inputHeight = 640;
    int[] argb = new int[inputWidth * inputHeight];
    bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    System.out.println(argb.length + "@getpixels ");
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    return yuv;
}

void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    final int frameSize = width * height;
    int yIndex = 0;
    int uvIndex = frameSize;
    System.out.println(yuv420sp.length + " @encoding " + frameSize);
    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff) >> 0;
            // well known RGB to YUV algorithm
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
            // NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
            // meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
            // pixel AND every other scanline.
            yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }
            index++;
        }
    }
}
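One possible restructuring, sketched here purely as an illustration and not as a verified fix: keep a single recorder session running and pass each new NV21 buffer into the recording method as a parameter, so the button click only starts (and later stops) the recorder while the image-processing AsyncTask simply delivers frames. The fields used below (recorder, yuvimage, startTime, recording) are assumed to be the same ones as in the question, initialised once when recording starts; recordYuvFrame is a hypothetical name.

// Sketch only: the AsyncTask calls recordYuvFrame(getNV21(bitmap)) whenever a frame is ready.
public void recordYuvFrame(byte[] yuv) {
    if (!recording || yuvimage == null || recorder == null) {
        return;                                              // recording has not been started yet
    }
    ((ByteBuffer) yuvimage.image[0].position(0)).put(yuv);   // overwrite the one reusable frame buffer
    try {
        long t = 1000 * (System.currentTimeMillis() - startTime);
        if (t > recorder.getTimestamp()) {
            recorder.setTimestamp(t);                        // keep timestamps monotonic
        }
        recorder.record(yuvimage);                           // encode the newest frame
    } catch (FFmpegFrameRecorder.Exception e) {
        e.printStackTrace();
    }
}

With this shape there is nothing to "re-run": only the byte array changes between calls, which is what the question asks for.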
-
How do I download the image with metadata in Python? [closed]
19 October 2024, by Temp Account
I am downloading some images in Python from the Airtable API and am trying to make a slideshow with them using ffmpeg. I download the images:


urllib2.urlretrieve(img['url'], "output/images/image_"+str(i)+".jpeg")



However, when I run the following ffmpeg command


ffmpeg -framerate 4/60 -i output/images/image_%d.jpeg output/out.mp4



I get the following error:


ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
 configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
 libavutil 58. 29.100 / 58. 29.100
 libavcodec 60. 31.102 / 60. 31.102
 libavformat 60. 16.100 / 60. 16.100
 libavdevice 60. 3.100 / 60. 3.100
 libavfilter 9. 12.100 / 9. 12.100
 libswscale 7. 5.100 / 7. 5.100
 libswresample 4. 12.100 / 4. 12.100
 libpostproc 57. 3.100 / 57. 3.100
[mjpeg @ 0x593283e5e3c0] bits 150 is invalid
[mjpeg @ 0x593283e5e3c0] bits 28 is invalid
[image2 @ 0x593283e5d380] Could not find codec parameters for stream 0 (Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'output/images/image_%d.jpeg':
 Duration: 00:01:00.00, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: mjpeg (Lossless), none(bt470bg/unknown/unknown), lossless, 0.07 fps, 0.07 tbr, 0.07 tbn
File 'output/out.mp4' already exists. Overwrite? [y/N] y
Stream mapping:
 Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (cf)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c8)
[mjpeg @ 0x593283e5f180] bits 150 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] bits 28 is invalid
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (ce)
[mjpeg @ 0x593283e5f180] mjpeg: unsupported coding type (c6)
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
 Last message repeated 1 times
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] unable to decode APP fields: Invalid data found when processing input
[mjpeg @ 0x593283e5f180] invalid id 255
[vist#0:0/mjpeg @ 0x593283e5f000] Error submitting packet to decoder: Invalid data found when processing input
Cannot determine format of input stream 0:0 after EOF
Error marking filters as finished
Error while filtering: Invalid data found when processing input
[vist#0:0/mjpeg @ 0x593283e5f000] Decode error rate 1 exceeds maximum 0.666667
[out#0/mp4 @ 0x593283e603c0] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0kB time=N/A bitrate=N/A speed=N/A 
Conversion failed!




However, downloading the images in Chrome and then creating the slideshow succeeds. The images downloaded with Chrome have metadata for the file type (JPEG), width and height; the images downloaded with Python have no metadata. How do I download that information so that my ffmpeg command will succeed?


Thanks!
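A minimal diagnostic sketch, assuming (this is only a guess from the error log, not something the question confirms) that the real problem is that the saved files are not valid JPEG data at all, for example because the Airtable attachment URL returned an HTML error page or an expired-link response instead of the image, rather than missing metadata. The helper below is hypothetical and only illustrates fetching the bytes explicitly and checking them before writing; img is assumed to be one attachment record from the Airtable API, as in the question.

import urllib.request

def download_image(img, i):
    # Fetch the attachment explicitly so the response can be inspected.
    req = urllib.request.Request(img['url'], headers={'User-Agent': 'Mozilla/5.0'})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
        ctype = resp.headers.get('Content-Type', '')
    # A JPEG file starts with the bytes FF D8; anything else (an HTML error page,
    # a redirect body, a truncated download) will make ffmpeg's mjpeg decoder fail
    # with errors like the ones above.
    if not data.startswith(b'\xff\xd8'):
        raise ValueError(f"image_{i}: got {ctype!r}, {len(data)} bytes, not a JPEG")
    with open(f"output/images/image_{i}.jpeg", "wb") as f:
        f.write(data)

If the files really are valid JPEGs, missing metadata is unlikely to be the cause, since ffmpeg reads the image dimensions from the JPEG data itself rather than from any external metadata.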


-
Adding Text to Video with ffmpeg in Flutter
9 March 2024, by Aiman
I am trying to add text to a video using the ffmpeg package, but it returns an error, i.e. return code 1.
Here's my code.


final videoPath = _controller.file.path;
final outputName = videoPath.hashCode.toString();

final output =
    File('${(await getTemporaryDirectory()).path}/$outputName.mp4');

String command =
    "-y -i $videoPath -filter_complex '[0]scale=540:-1[s];[s]drawtext=fontfile=/storage/emulated/0/Download/SuperDessert.ttf:text='MY_TEXT':fontsize=24:fontcolor=white:x=(w-text_w)/2:y=(h-text_h)/2' ${output.path}";

final session = await FFmpegKit.execute(command);

final returnCode = await session.getReturnCode();

if (ReturnCode.isSuccess(returnCode)) {
  log("Success full text added");
} else {
  log("Error adding text: ${await session.getFailStackTrace()}");
}



I have tried changing the output directory, and changed fontfile to font.
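A few hedged guesses at why this particular command fails with return code 1, based only on what is shown: scale=540:-1 can produce an odd output height, which libx264 refuses to encode; the filter graph is wrapped in single quotes while text='MY_TEXT' also uses single quotes, so the quoting is nested and the drawtext argument may be mangled when the command string is tokenised; and the font file must actually exist and be readable at that path on the device. A sketch of the command rebuilt without nested quotes and with an even height (the font path and text are the question's own values; everything else is illustrative, not a confirmed fix):

final command = "-y -i $videoPath "
    "-vf \"scale=540:-2,"
    "drawtext=fontfile=/storage/emulated/0/Download/SuperDessert.ttf:"
    "text=MY_TEXT:fontsize=24:fontcolor=white:"
    "x=(w-text_w)/2:y=(h-text_h)/2\" "
    "${output.path}";

If it still fails, the session's full log output (rather than the return code alone) normally shows exactly which option or filter ffmpeg rejected.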