jni - Underlying technique of Android's FaceDetector
I am implementing a face tracker on Android, and as a literature study I would like to identify the underlying technique of Android's built-in FaceDetector.
Simply put: I want to understand how the android.media.FaceDetector classifier works.
A brief Google search did not generate anything informative, so I thought I would take a look at the code.
Looking at the Java source code, not much can be learned: FaceDetector is simply a class that is given the image dimensions and the maximum number of faces, and then returns an array of faces.
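For reference, a minimal sketch of how that Java-level API is used (the class and method names are the standard android.media.FaceDetector ones; the surrounding bitmap handling and the maxFaces value are only assumptions for illustration):

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public class FaceDetectorSketch {
    // The bitmap is assumed to already be in RGB_565, which FaceDetector requires.
    public static void detect(Bitmap bitmap565) {
        int maxFaces = 4; // arbitrary example value

        // The detector only receives the image dimensions and a face count...
        FaceDetector detector =
                new FaceDetector(bitmap565.getWidth(), bitmap565.getHeight(), maxFaces);

        // ...and fills an array of Face results; everything else happens natively.
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(bitmap565, faces);

        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);               // midpoint between the eyes
            float eyeDistance = faces[i].eyesDistance();
            float confidence  = faces[i].confidence();
        }
    }
}
```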
Digging through the Android source, I followed the chain of function calls, which, very briefly summarized, taught me:
- A "FaceFinder" is created
- On line 90, bbs_MemSeg_alloc returns a btk_HFaceFinder object (which contains the function that actually finds the faces); essentially it copies the hsdkA->contextE.memTblE.espArrE array of the original btk_HSDK object created via btk_SDK_create()
- It appears that a maze of functions pass each other pointers and instances of btk_HSDK, but nowhere can I find a concrete assignment of sdk->contextE.memTblE.espArrE[0], which presumably holds the magic.
What I have found, though, is a small clue: the JNI code references an FFTEm library for which I cannot find the source code. FFT stands for Fast Fourier Transform, which is possibly used together with a pre-trained neural network. The only literature I can find aligns with this principle.
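To make that FFT hypothesis concrete, here is a small, self-contained sketch (not Android or Neven code, purely an illustration) of the convolution theorem: circular convolution computed directly and via the discrete Fourier transform gives the same result, which is why an FFT library is attractive for running a pre-trained network's convolution/correlation filters over an image quickly. The naive DFT below stands in for a real FFT.

```java
import java.util.Arrays;

public class ConvolutionTheoremDemo {

    // Direct circular convolution: y[n] = sum_k x[k] * h[(n - k) mod N]
    static double[] circularConvolve(double[] x, double[] h) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                y[i] += x[k] * h[Math.floorMod(i - k, n)];
        return y;
    }

    // Naive DFT (O(N^2)); a real FFT library computes the same thing in O(N log N).
    static double[][] dft(double[] re, double[] im, boolean inverse) {
        int n = re.length;
        double[] outRe = new double[n], outIm = new double[n];
        double sign = inverse ? 1 : -1;
        for (int k = 0; k < n; k++) {
            for (int t = 0; t < n; t++) {
                double ang = sign * 2 * Math.PI * k * t / n;
                outRe[k] += re[t] * Math.cos(ang) - im[t] * Math.sin(ang);
                outIm[k] += re[t] * Math.sin(ang) + im[t] * Math.cos(ang);
            }
            if (inverse) { outRe[k] /= n; outIm[k] /= n; }
        }
        return new double[][]{outRe, outIm};
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4}, h = {0.5, 0.25, 0, 0};

        // Spatial-domain result.
        double[] direct = circularConvolve(x, h);

        // Frequency-domain result: DFT both signals, multiply pointwise, inverse DFT.
        double[][] X = dft(x, new double[4], false);
        double[][] H = dft(h, new double[4], false);
        double[] prodRe = new double[4], prodIm = new double[4];
        for (int k = 0; k < 4; k++) {
            prodRe[k] = X[0][k] * H[0][k] - X[1][k] * H[1][k];
            prodIm[k] = X[0][k] * H[1][k] + X[1][k] * H[0][k];
        }
        double[] viaDft = dft(prodRe, prodIm, true)[0];

        System.out.println("direct : " + Arrays.toString(direct));
        System.out.println("via DFT: " + Arrays.toString(viaDft)); // same values, up to rounding
    }
}
```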
I also don't really know whether I am on the right track, so any suggestions would certainly help.
Edit: I've added a +100 bounty for anybody who can give some insight.
I'm on a phone, so I can't respond at length, but the Google keywords "neven vision algorithm" drag up some useful papers...
Also, US Patent 6222939 is related.
Perhaps some of those links will be easy leads...