
Other articles (96)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all the software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
Multilang: improving the interface for multilingual blocks
18 February 2011. Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialized.
After it has been activated, a preconfiguration is set up automatically by MediaSPIP init, making the new feature immediately operational. It is therefore not necessary to go through a configuration step for this. -
APPENDIX: the plugins used specifically for the farm
5 March 2010. The central/master site of the farm needs several additional plugins, compared with the channels, to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a pooled instance as soon as users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)
On other sites (11694)
-
Save an image during face detection
13 October 2017, by hidura. Hello, I have an app that saves an image every time my face moves, something similar to the IG stories. My problem is that the phone gets very slow and the app suddenly closes because of memory-allocation problems. I want to know how I can do this without slowing down the phone and without getting the crash.
The following code opens the SVG:
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
private static Bitmap getBitmap(VectorDrawable vectorDrawable) {
    Bitmap bitmap = Bitmap.createBitmap(vectorDrawable.getIntrinsicWidth(),
            vectorDrawable.getIntrinsicHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    vectorDrawable.setBounds(0, 0, canvas.getWidth(), canvas.getHeight());
    vectorDrawable.draw(canvas);
    Log.e("", "getBitmap: 1");
    return bitmap;
}
private static Bitmap getBitmap(Context context, int drawableId) {
    Log.e("", "getBitmap: 2");
    Drawable drawable = ContextCompat.getDrawable(context, drawableId);
    if (drawable instanceof BitmapDrawable) {
        return BitmapFactory.decodeResource(context.getResources(), drawableId);
    } else if (drawable instanceof VectorDrawable) {
        return getBitmap((VectorDrawable) drawable);
    } else {
        throw new IllegalArgumentException("unsupported drawable type");
    }
}

The next one draws the SVG below the position of the face.
/**
 * Draws the face annotations for position on the supplied canvas.
 */
@Override
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }
    // Draws a circle at the position of the detected face, with the face's track id below.
    float x = translateX(face.getPosition().x + face.getWidth() / 2);
    float y = translateY(face.getPosition().y + face.getHeight() / 2);
    canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
    canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
    canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
    canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
    canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET * 2, y - ID_Y_OFFSET * 2, mIdPaint);
    // Draws a bounding box around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = x - xOffset;
    float top = y - yOffset;
    float right = x + xOffset;
    float bottom = y + yOffset;
    //bitmap = BitmapFactory.decodeResource(getOverlay().getContext().getResources(), R.drawable.ic_shirt);
    bitmap = getBitmap(getOverlay().getContext(), R.drawable.ic_tshirt);
    float eyeX = left - 400;
    // for (Landmark l : face.getLandmarks()) {
    //     if (l.getType() == Landmark.LEFT_EYE) {
    //         eyeX = l.getPosition().x + bitmap.getWidth() / 2;
    //     }
    // }
    tshirt = Bitmap.createScaledBitmap(bitmap, (int) scaleX(bitmap.getWidth() / 2),
            (int) scaleY(bitmap.getHeight() / 2), false);
    float top_shirt = (face.getPosition().y + face.getHeight()) + 200;
    canvas.drawBitmap(tshirt, eyeX, top_shirt, new Paint());
    Canvas myCanvas = new Canvas(tshirt);
    myCanvas.drawBitmap(tshirt, eyeX, top_shirt, new Paint());
    HashMap<String, Bitmap> args = new HashMap<>();
    args.put("tshirt", tshirt);
    new saveImg(args).execute();
    //canvas.drawRect(left, top, right, bottom, mBoxPaint);
}

This one saves the image on the phone.
package com.google.android.gms.samples.vision.face.facetracker;

import android.graphics.Bitmap;
import android.os.AsyncTask;
import android.os.Environment;

import java.io.File;
import java.io.FileOutputStream;
import java.util.HashMap;

/**
 * Created by diegohidalgo on 10/12/17.
 */
public class saveImg extends AsyncTask<Void, Void, Void> {
    Bitmap tshirt;
    String name;

    saveImg(HashMap<String, Bitmap> args) {
        tshirt = args.get("tshirt");
        File file = new File(Environment.getExternalStorageDirectory() + "/facedetection/");
        File[] list = file.listFiles();
        int count = 0;
        for (File f : list) {
            String name = f.getName();
            if (name.endsWith(".png"))
                count++;
        }
        name = "img" + count + ".png";
    }

    @Override
    protected Void doInBackground(Void... args) {
        File file = new File(Environment.getExternalStorageDirectory() + "/facedetection/" + name);
        try {
            tshirt.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(file));
        } catch (Exception e) {
            e.printStackTrace();
        }
        System.gc();
        tshirt.recycle();
        tshirt = null;
        return null;
    }

    protected void onPostExecute() {
    }
}

This is the error it gives before the app closes.
10-13 10:38:17.526 8443-8443/com.google.android.gms.samples.vision.face.facetracker W/art: Throwing OutOfMemoryError "Failed to allocate a 8916492 byte allocation with 1111888 free bytes and 1085KB until OOM"
10-13 10:38:18.020 8443-8443/com.google.android.gms.samples.vision.face.facetracker I/Process: Sending signal. PID: 8443 SIG: 9

Thanks in advance.
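The log shows the allocation that fails is the full-size ARGB_8888 bitmap that draw() decodes and rescales on every frame. A common mitigation, straight from the Android BitmapFactory.Options.inSampleSize pattern, is to decode the drawable once at a bounded size and reuse it, rather than allocating per frame. A minimal sketch of the sample-size calculation (the class name OomSketch and the reqWidth/reqHeight parameters are illustrative, not from the post):

```java
// Sketch of the standard Android downsampling pattern: compute a
// power-of-two inSampleSize so the decoded bitmap stays within the
// requested bounds, then decode once and cache the result instead of
// calling getBitmap()/createScaledBitmap() on every draw() pass.
public class OomSketch {

    // Largest power-of-two sample size that keeps the decoded image
    // at least reqWidth x reqHeight.
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while ((height / (inSampleSize * 2)) >= reqHeight
                && (width / (inSampleSize * 2)) >= reqWidth) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    /* On Android it would be used roughly like this (framework calls,
       shown as a comment since they only run on-device):
       BitmapFactory.Options opts = new BitmapFactory.Options();
       opts.inJustDecodeBounds = true;
       BitmapFactory.decodeResource(res, id, opts);
       opts.inSampleSize = calculateInSampleSize(opts.outWidth, opts.outHeight, 512, 512);
       opts.inJustDecodeBounds = false;
       Bitmap small = BitmapFactory.decodeResource(res, id, opts);
    */
}
```

Caching the decoded bitmap in a field (instead of recycling and re-decoding per frame) also avoids queuing a new AsyncTask allocation for every face movement.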
-
ffmpeg - Can't stably save rtsp stream to file
20 August 2022, by André Luís. I'm trying to save the RTSP stream from this camera to file. However, at some point I get the following errors, which stop the process.




video:331639kB audio:5199kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown




And a bunch of these, before the previous one:




Non-monotonous DTS in output stream 0:1; previous: 5439375, current: 5439283; changing to 5439376. This may result in incorrect timestamps in the output file.




Here is the command I'm running:


ffmpeg -i "rtsp://192.168.0.100:554/cam/realmonitor?channel=1&subtype=0" -c copy -f segment -segment_time 00:01:00 -reset_timestamps 1 -segment_format mkv -strftime 1 /home/user/.camera/recordings/2022/8/19/%H_%M_%S.mkv



I've tried adding each of these parameters, and all of them together:


-fflags +autobsf+genpts -y -threads 1 -use_wallclock_as_timestamps 1 -dts_delta_threshold 0 -rtsp_transport tcp -i
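For what it's worth, those are all input options, so they need to appear before -i to take effect. A sketch of the combined command, not a verified fix: forcing TCP transport is the usual remedy when non-monotonic DTS warnings come from RTP packet loss over UDP.

```shell
# Same command with the tried flags placed before -i; -rtsp_transport tcp
# avoids UDP packet loss, a common cause of non-monotonic DTS from cameras.
ffmpeg -rtsp_transport tcp -use_wallclock_as_timestamps 1 \
  -i "rtsp://192.168.0.100:554/cam/realmonitor?channel=1&subtype=0" \
  -c copy -f segment -segment_time 00:01:00 -reset_timestamps 1 \
  -segment_format mkv -strftime 1 \
  /home/user/.camera/recordings/2022/8/19/%H_%M_%S.mkv
```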



ffprobe:


{
 "streams": [
 {
 "index": 0,
 "codec_name": "h264",
 "codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
 "profile": "Main",
 "codec_type": "video",
 "codec_tag_string": "[0][0][0][0]",
 "codec_tag": "0x0000",
 "width": 1920,
 "height": 1080,
 "coded_width": 1920,
 "coded_height": 1080,
 "closed_captions": 0,
 "film_grain": 0,
 "has_b_frames": 0,
 "pix_fmt": "yuvj420p",
 "level": 40,
 "color_range": "pc",
 "chroma_location": "left",
 "field_order": "progressive",
 "refs": 1,
 "is_avc": "false",
 "nal_length_size": "0",
 "r_frame_rate": "100/1",
 "avg_frame_rate": "0/0",
 "time_base": "1/90000",
 "start_pts": 9000,
 "start_time": "0.100000",
 "bits_per_raw_sample": "8",
 "extradata_size": 27,
 "disposition": {
 "default": 0,
 "dub": 0,
 "original": 0,
 "comment": 0,
 "lyrics": 0,
 "karaoke": 0,
 "forced": 0,
 "hearing_impaired": 0,
 "visual_impaired": 0,
 "clean_effects": 0,
 "attached_pic": 0,
 "timed_thumbnails": 0,
 "captions": 0,
 "descriptions": 0,
 "metadata": 0,
 "dependent": 0,
 "still_image": 0
 }
 },
 {
 "index": 1,
 "codec_name": "aac",
 "codec_long_name": "AAC (Advanced Audio Coding)",
 "profile": "LC",
 "codec_type": "audio",
 "codec_tag_string": "[0][0][0][0]",
 "codec_tag": "0x0000",
 "sample_fmt": "fltp",
 "sample_rate": "16000",
 "channels": 1,
 "channel_layout": "mono",
 "bits_per_sample": 0,
 "r_frame_rate": "0/0",
 "avg_frame_rate": "0/0",
 "time_base": "1/16000",
 "start_pts": 0,
 "start_time": "0.000000",
 "extradata_size": 2,
 "disposition": {
 "default": 0,
 "dub": 0,
 "original": 0,
 "comment": 0,
 "lyrics": 0,
 "karaoke": 0,
 "forced": 0,
 "hearing_impaired": 0,
 "visual_impaired": 0,
 "clean_effects": 0,
 "attached_pic": 0,
 "timed_thumbnails": 0,
 "captions": 0,
 "descriptions": 0,
 "metadata": 0,
 "dependent": 0,
 "still_image": 0
 }
 }
 ]
}



I honestly don't know what else to do.


-
how to configure ffserver to save the incoming feed in a different file every 30 mins or so
29 May 2014, by Muhammad Ali. I want to keep a backlog of my camera video in files of 30 minutes' duration. What I've read on the internet about ffserver so far has allowed me to connect all my cameras to ffserver using ffmpeg and receive the video on another system using VLC or ffplay.
Now what I want is to store the camera video in separate files, preferably named with a timestamp, while continuing to stream the live video.
Once there is a list of 30-minute files, I'd like a playlist of the saved files that can be opened and played in VLC.
Sort of like a remote media player, with video sources coming from ffserver, a playlist, and 30-minute video clips.
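As a note for readers: ffserver was removed from FFmpeg in version 4.0, but the recording half of this setup can be done with ffmpeg's segment muxer alone. A sketch under assumptions (the camera URL rtsp://camera.local/stream and the output directory are hypothetical):

```shell
# Record the camera into 30-minute MKV files named by timestamp, and keep
# an M3U8 playlist of the finished segments that VLC can open directly.
ffmpeg -rtsp_transport tcp -i "rtsp://camera.local/stream" \
  -c copy \
  -f segment -segment_time 1800 -reset_timestamps 1 \
  -segment_format mkv -strftime 1 \
  -segment_list /home/user/recordings/playlist.m3u8 -segment_list_type m3u8 \
  "/home/user/recordings/%Y-%m-%d_%H-%M-%S.mkv"
```

Live restreaming would still need a separate component (e.g. an RTSP/HLS server), since ffmpeg here only writes the archive.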