Merged in feature/rename_update (pull request #2)
author Eric Z <ezavesky@research.att.com>
Fri, 17 Nov 2017 17:31:23 +0000 (17:31 +0000)
committer Eric Z <ezavesky@research.att.com>
Fri, 17 Nov 2017 17:31:23 +0000 (17:31 +0000)
Feature/rename update

README.md
face_privacy_filter/_version.py
face_privacy_filter/filter_image.py
face_privacy_filter/transform_detect.py
face_privacy_filter/transform_region.py
setup.py
testing/app.py
testing/swagger.yaml
web_demo/face-privacy.js

index 580bc45..bba12b1 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 A model for face detection and suppression.
 
 ## Image Analysis for Face-based Privacy Filtering
-This source code creates and pushes a model into Cognita that processes
+This source code creates and pushes a model into Acumos that processes
 incoming images and outputs detected faces as well as the original image
 input (if configured that way).  The model uses a [python interface](https://pypi.python.org/pypi/opencv-python)
 to the [OpenCV library](https://opencv.org/) to detect faces and perform
@@ -62,7 +62,30 @@ composed together for operation.
 ./bin/run_local.sh -d model_pix -i detect.csv -p output.jpg --csv_input
 ```
 
+### Installation Troubleshooting
+With some environment-managed versions of Python (e.g. conda), a problem
+can come up during installation of the dependent package `opencv-python`.
+If you launch your Python interpreter and see an error like the one
+below, keep reading.
 
+```
+>>> import cv2
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+ImportError: dynamic module does not define module export function (PyInit_cv2)
+>>>
+```
+
+This is likely because your `PYTHONPATH` is not correctly configured to
+point to the additional installed libraries.
+
+* From the [simple example here](https://stackoverflow.com/a/42160595)
+you can check your environment with `echo $PYTHONPATH`.  If it does not
+contain the directory that you installed to, then you have a problem.
+* Please check your installation by running `python -vv -c 'import cv2'`
+and confirming that the last loaded library is in the right location.
+* In some instances, this variable needed to be blank to work properly
+(i.e. `export PYTHONPATH=`), run at some point during start-up.
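+
+For example, you can quickly confirm which library is actually loaded (a
+minimal check; the exact path will vary by environment):
+
+```
+>>> import cv2
+>>> cv2.__file__   # should point into your environment's site-packages
+```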
 
 ## Face-based Use Cases
 This project includes a number of face-based use cases including raw
@@ -74,7 +97,7 @@ incoming images and outputs detected faces.
 
 # Example Interface
 An instance should first be built and downloaded and then
-launched locally.  Afterwards, the sample application found in 
+launched locally.  Afterwards, the sample application found in
 [web_demo](web_demo) uses a `localhost` service to classify
 and visualize the results of image classification.
 
index 38c9a1d..592027f 100644
--- a/face_privacy_filter/_version.py
+++ b/face_privacy_filter/_version.py
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
-__version__ = "0.1.3"
+__version__ = "0.2.0"
 MODEL_NAME = 'face_privacy_filter'
index 352b87a..947eb58 100644
--- a/face_privacy_filter/filter_image.py
+++ b/face_privacy_filter/filter_image.py
@@ -1,7 +1,7 @@
 #! python
 # -*- coding: utf-8 -*-
 """
-Wrapper for image emotion classification task 
+Wrapper for face privacy transform task
 """
 
 import os.path
@@ -15,26 +15,48 @@ from face_privacy_filter.transform_region import RegionTransform
 from face_privacy_filter._version import MODEL_NAME
 
 
-def model_create_pipeline(transformer, pipeline_type="detect"):
-    #from sklearn.pipeline import Pipeline
-    dependent_modules = [pd, np, 'opencv-python']  # define as dependent libraries
+def model_create_pipeline(transformer):
+    from acumos.session import Requirements
+    from acumos.modeling import Model, List, create_namedtuple
+    import sklearn
+    import cv2
+    from os import path
 
-    # for now, do nothing specific to transformer...
+    # derive the input type from the transformer
+    type_list, type_name = transformer._type_in  # e.g. ({'mime_type': str, 'image_binary': bytes}, "FaceImage")
+    input_type = [(k, List[type_list[k]]) for k in type_list]
+    type_in = create_namedtuple(type_name, input_type)
 
-    return transformer, dependent_modules
+    # derive the output type from the transformer
+    type_list, type_name = transformer._type_out
+    output_type = [(k, List[type_list[k]]) for k in type_list]
+    type_out = create_namedtuple(type_name, output_type)
+
+    def predict_class(val_wrapped: type_in) -> type_out:
+        '''Returns an array of float predictions'''
+        df = pd.DataFrame(list(zip(*val_wrapped)), columns=val_wrapped._fields)
+        # df = pd.DataFrame(np.column_stack(val_wrapped), columns=val_wrapped._fields)  # numpy doesn't like binary
+        tags_df = transformer.predict(df)
+        tags_list = type_out(*(col for col in tags_df.values.T))  # flatten to tag set
+        return tags_list
+
+    # compute path of this package to add it as a dependency
+    package_path = path.dirname(path.realpath(__file__))
+    return Model(transform=predict_class), Requirements(packages=[package_path], reqs=[pd, np, sklearn],
+                                                        req_map={cv2: 'opencv-python'})
 
 
 def main(config={}):
     import argparse
     parser = argparse.ArgumentParser()
     parser.add_argument('-p', '--predict_path', type=str, default='', help="save detections from model (model must be provided via 'dump_model')")
-    parser.add_argument('-i', '--input', type=str, default='',help='absolute path to input data (image or csv, only during prediction / dump)')
+    parser.add_argument('-i', '--input', type=str, default='', help='absolute path to input data (image or csv, only during prediction / dump)')
     parser.add_argument('-c', '--csv_input', dest='csv_input', action='store_true', default=False, help='input as CSV format not an image')
     parser.add_argument('-s', '--suppress_image', dest='suppress_image', action='store_true', default=False, help='do not create an extra row for a returned image')
-    parser.add_argument('-f', '--function', type=str, default='detect',help='which type of model to generate', choices=['detect', 'pixelate'])
+    parser.add_argument('-f', '--function', type=str, default='detect', help='which type of model to generate', choices=['detect', 'pixelate'])
     parser.add_argument('-a', '--push_address', help='server address to push the model (e.g. http://localhost:8887/v2/models)', default='')
     parser.add_argument('-d', '--dump_model', help='dump model to a pickle directory for local running', default='')
-    config.update(vars(parser.parse_args()))     #pargs, unparsed = parser.parse_known_args()
+    config.update(vars(parser.parse_args()))     # pargs, unparsed = parser.parse_known_args()
 
     if not config['predict_path']:
         print("Attempting to create new model for dump or push...")
@@ -47,19 +69,24 @@ def main(config={}):
         else:
             print("Error: Functional mode '{:}' unknown, aborting create".format(config['function']))
         inputDf = transform.generate_in_df()
-        pipeline, EXTRA_DEPS = model_create_pipeline(transform, "detect")
+        pipeline, reqs = model_create_pipeline(transform)
 
         # formulate the pipeline to be used
-        model_name = MODEL_NAME+"_"+config['function']
+        model_name = MODEL_NAME + "_" + config['function']
         if 'push_address' in config and config['push_address']:
-            from cognita_client.push import push_sklearn_model # push_skkeras_hybrid_model (keras?)
+            from acumos.session import AcumosSession
             print("Pushing new model to '{:}'...".format(config['push_address']))
-            push_sklearn_model(pipeline, inputDf, api=config['push_address'], name=model_name, extra_deps=EXTRA_DEPS)
+            session = AcumosSession(push_api=config['push_address'], auth_api=config.get('auth_address'))
+            session.push(pipeline, model_name, reqs)  # uploads the model to the Acumos server
 
         if 'dump_model' in config and config['dump_model']:
-            from cognita_client.wrap.dump import dump_sklearn_model # dump_skkeras_hybrid_model (keras?)
+            from acumos.session import AcumosSession
+            from os import makedirs
+            if not os.path.exists(config['dump_model']):
+                makedirs(config['dump_model'])
             print("Dumping new model to '{:}'...".format(config['dump_model']))
-            dump_sklearn_model(pipeline, inputDf, config['dump_model'], name=model_name, extra_deps=EXTRA_DEPS)
+            session = AcumosSession()
+            session.dump(pipeline, model_name, config['dump_model'], reqs)  # writes the dumped model directory
 
     else:
         if not config['dump_model'] or not os.path.exists(config['dump_model']):
@@ -70,13 +97,22 @@ def main(config={}):
             sys.exit(-1)
 
         print("Attempting predict/transform on input sample...")
-        from cognita_client.wrap.load import load_model
+        from acumos.wrapped import load_model
         model = load_model(config['dump_model'])
         if not config['csv_input']:
             inputDf = FaceDetectTransform.generate_in_df(config['input'])
         else:
-            inputDf = pd.read_csv(config['input'], converters={FaceDetectTransform.COL_IMAGE_DATA:FaceDetectTransform.read_byte_arrays})
-        dfPred = model.transform.from_native(inputDf).as_native()
+            inputDf = pd.read_csv(config['input'], converters={FaceDetectTransform.COL_IMAGE_DATA: FaceDetectTransform.read_byte_arrays})
+
+        type_in = model.transform._input_type
+        transform_in = type_in(*tuple(col for col in inputDf.values.T))
+        transform_out = model.transform.from_wrapped(transform_in).as_wrapped()
+        dfPred = pd.DataFrame(list(zip(*transform_out)), columns=transform_out._fields)
+
+        if not config['csv_input']:
+            dfPred = FaceDetectTransform.suppress_image(dfPred)
+        print("ALMOST DONE")
+        print(dfPred)
 
         if config['predict_path']:
             print("Writing prediction to file '{:}'...".format(config['predict_path']))
@@ -84,11 +120,10 @@ def main(config={}):
                 dfPred.to_csv(config['predict_path'], sep=",", index=False)
             else:
                 FaceDetectTransform.generate_out_image(dfPred, config['predict_path'])
-        if not config['csv_input']:
-            dfPred = FaceDetectTransform.suppress_image(dfPred)
 
         if dfPred is not None:
             print("Predictions:\n{:}".format(dfPred))
 
+
 if __name__ == '__main__':
     main()
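
A minimal sketch (not part of the commit) of the DataFrame/NamedTuple conversion used by `model_create_pipeline` and `main` above; the field mapping mirrors `_type_in` from transform_detect.py, and the sample frame is illustrative:

```
import pandas as pd
from acumos.modeling import List, create_namedtuple

# column-name -> type mapping, as returned by transformer._type_in
type_list = {'mime_type': str, 'image_binary': bytes}
FaceImage = create_namedtuple('FaceImage', [(k, List[type_list[k]]) for k in type_list])

# DataFrame -> wrapped named tuple of columns, and back again
df = pd.DataFrame([['image/jpeg', b'\xff\xd8...']], columns=['mime_type', 'image_binary'])
wrapped = FaceImage(*tuple(col for col in df.values.T))
df_back = pd.DataFrame(list(zip(*wrapped)), columns=wrapped._fields)
```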
index 9dc35f5..5dc6799 100644
--- a/face_privacy_filter/transform_detect.py
+++ b/face_privacy_filter/transform_detect.py
@@ -10,6 +10,14 @@ import numpy as np
 from sklearn.base import BaseEstimator, ClassifierMixin
 import base64
 
+import gzip
+import sys
+if sys.version_info[0] < 3:
+    from cStringIO import StringIO as BytesIO
+else:
+    from io import BytesIO
+
+
 class FaceDetectTransform(BaseEstimator, ClassifierMixin):
     '''
     A sklearn transformer mixin that detects faces and optionally outputs the original detected image
@@ -22,25 +30,46 @@ class FaceDetectTransform(BaseEstimator, ClassifierMixin):
     COL_REGION_IDX = 'region'
     COL_IMAGE_IDX = 'image'
     COL_IMAGE_MIME = 'mime_type'
-    COL_IMAGE_DATA = 'base64_data'
+    COL_IMAGE_DATA = 'image_binary'
     VAL_REGION_IMAGE_ID = -1
 
-    def __init__(self, cascade_path=None, include_image=True):
+    def __init__(self, cascade_path=None, cascade_stream=None, include_image=True):
         self.include_image = include_image    # should output transform include image?
-        self.cascade_path = cascade_path    # abs path outside of module
-        self.cascade_obj = None # late-load this component
+        self.cascade_obj = None  # late-load this component
+        self.cascade_stream = cascade_stream    # compressed binary file for cascade data
+        if self.cascade_stream is None:
+            if cascade_path is None:   # default/included data?
+                pathRoot = os.path.dirname(os.path.abspath(__file__))
+                cascade_path = os.path.join(pathRoot, FaceDetectTransform.CASCADE_DEFAULT_FILE)
+            raw_stream = b""
+            with open(cascade_path, 'rb') as f:
+                raw_stream = f.read()
+                self.cascade_stream = {'name': os.path.basename(cascade_path),
+                                       'data': FaceDetectTransform.string_compress(raw_stream)}
+
+    @staticmethod
+    def string_compress(string_data):
+        out_data = BytesIO()
+        with gzip.GzipFile(fileobj=out_data, mode="wb") as f:
+            f.write(string_data)
+        return out_data.getvalue()
+
+    @staticmethod
+    def string_decompress(compressed_data):
+        in_data = BytesIO(compressed_data)
+        ret_str = None
+        with gzip.GzipFile(fileobj=in_data, mode="rb") as f:
+            ret_str = f.read()
+        return ret_str
 
     def get_params(self, deep=False):
-        return {'include_image': self.include_image}
+        return {'include_image': self.include_image, 'cascade_stream': self.cascade_stream}
 
     @staticmethod
     def generate_in_df(path_image="", bin_stream=b""):
         # munge stream and mimetype into input sample
         if path_image and os.path.exists(path_image):
             bin_stream = open(path_image, 'rb').read()
-        bin_stream = base64.b64encode(bin_stream)
-        if type(bin_stream) == bytes:
-            bin_stream = bin_stream.decode()
         return pd.DataFrame([['image/jpeg', bin_stream]], columns=[FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA])
 
     @staticmethod
@@ -50,42 +79,33 @@ class FaceDetectTransform(BaseEstimator, ClassifierMixin):
             f.write(row[FaceDetectTransform.COL_IMAGE_DATA][0])
 
     @staticmethod
-    def generate_out_dict(idx=VAL_REGION_IMAGE_ID, x=0, y=0, w=0, h=0, image=0):
-        return {FaceDetectTransform.COL_REGION_IDX: idx, FaceDetectTransform.COL_FACE_X: x,
-                FaceDetectTransform.COL_FACE_Y: y, FaceDetectTransform.COL_FACE_W: w, FaceDetectTransform.COL_FACE_H: h,
-                FaceDetectTransform.COL_IMAGE_IDX: image,
-                FaceDetectTransform.COL_IMAGE_MIME: '', FaceDetectTransform.COL_IMAGE_DATA: ''}
+    def output_names_():
+        return [FaceDetectTransform.COL_IMAGE_IDX, FaceDetectTransform.COL_REGION_IDX,
+                FaceDetectTransform.COL_FACE_X, FaceDetectTransform.COL_FACE_Y,
+                FaceDetectTransform.COL_FACE_W, FaceDetectTransform.COL_FACE_H,
+                FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA]
+
+    @staticmethod
+    def generate_out_dict(idx=VAL_REGION_IMAGE_ID, x=0, y=0, w=0, h=0, image=0, bin_stream=b"", media=""):
+        return dict(zip(FaceDetectTransform.output_names_(), [image, idx, x, y, w, h, media, bin_stream]))
 
     @staticmethod
     def suppress_image(df):
-        keep_col = [FaceDetectTransform.COL_FACE_X, FaceDetectTransform.COL_FACE_Y,
-                    FaceDetectTransform.COL_FACE_W, FaceDetectTransform.COL_FACE_H,
-                    FaceDetectTransform.COL_FACE_W, FaceDetectTransform.COL_FACE_H,
-                    FaceDetectTransform.COL_REGION_IDX, FaceDetectTransform.COL_IMAGE_IDX]
-        blank_cols = [col for col in df.columns if col not in keep_col]
+        blank_cols = [FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA]
         # clear the image mime/data columns so no binary payload is returned
-        df.loc[df[FaceDetectTransform.COL_REGION_IDX]==FaceDetectTransform.VAL_REGION_IMAGE_ID,blank_cols] = ""
+        df[blank_cols] = None
         return df
 
     @property
-    def output_names_(self):
-        return [FaceDetectTransform.COL_REGION_IDX, FaceDetectTransform.COL_FACE_X, FaceDetectTransform.COL_FACE_Y,
-                 FaceDetectTransform.COL_FACE_W, FaceDetectTransform.COL_FACE_H,
-                 FaceDetectTransform.COL_IMAGE_IDX, FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA]
-
-    @property
-    def output_types_(self):
-        list_name = self.output_names_
-        list_type = self.classes_
-        return [{list_name[i]:list_type[i]} for i in range(len(list_name))]
-
-    @property
-    def n_outputs_(self):
-        return 8
+    def _type_in(self):
+        """Custom input type for this processing transformer"""
+        return {FaceDetectTransform.COL_IMAGE_MIME: str, FaceDetectTransform.COL_IMAGE_DATA: bytes}, "FaceImage"
 
     @property
-    def classes_(self):
-        return [int, int, int, int, int, int, str, str]
+    def _type_out(self):
+        """Custom input type for this processing transformer"""
+        output_dict = FaceDetectTransform.generate_out_dict()
+        return {k: type(output_dict[k]) for k in output_dict}, "DetectionFrames"
 
     def score(self, X, y=None):
         return 0
@@ -93,52 +113,54 @@ class FaceDetectTransform(BaseEstimator, ClassifierMixin):
     def fit(self, X, y=None):
         return self
 
+    def load_cascade(self):
+        # if no cascade object exists yet, deserialize it from the compressed stream; returns False when a load was required
+        if self.cascade_obj is None:
+            if self.cascade_stream is not None:
+                import tempfile
+                with tempfile.TemporaryDirectory() as tdir:
+                    cascade_data = FaceDetectTransform.string_decompress(self.cascade_stream['data'])
+                    cascade_path = os.path.join(tdir, self.cascade_stream['name'])
+                    with open(cascade_path, 'wb') as f:
+                        f.write(cascade_data)
+                    self.cascade_obj = cv2.CascadeClassifier(cascade_path)
+            return False
+        return True
+
     def predict(self, X, y=None):
         """
         Assumes a numpy array of [[mime_type, binary_string] ... ]
-           where mime_type is an image-specifying mime type and binary_string is the raw image bytes       
+           where mime_type is an image-specifying mime type and binary_string is the raw image bytes
         """
-        # if no model exists yet, create it
-        if self.cascade_obj is None:
-            if self.cascade_path is not None:
-                self.cascade_obj = cv2.CascadeClassifier(self.cascade_path)
-            else:   # none provided, load what came with the package
-                pathRoot = os.path.dirname(os.path.abspath(__file__))
-                pathFile = os.path.join(pathRoot, FaceDetectTransform.CASCADE_DEFAULT_FILE)
-                self.cascade_obj = cv2.CascadeClassifier(pathFile)
-
+        self.load_cascade()  # JIT load model
         dfReturn = None
+        listData = []
         for image_idx in range(len(X)):
             image_byte = X[FaceDetectTransform.COL_IMAGE_DATA][image_idx]
-            if type(image_byte)==str:
+            if type(image_byte) == str:
                 image_byte = image_byte.encode()
-            image_byte = bytearray(base64.b64decode(image_byte))
+                image_byte = base64.b64decode(image_byte)
+            image_byte = bytearray(image_byte)
             file_bytes = np.asarray(image_byte, dtype=np.uint8)
             img = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
             # img = cv2.imread(image_set[1])
             faces = self.detect_faces(img)
 
-            df = pd.DataFrame()  # start with empty DF for this image
             if self.include_image:  # create and append the image if that's requested
-                dict_image = FaceDetectTransform.generate_out_dict(w=img.shape[1], h=img.shape[0], image=image_idx)
-                dict_image[FaceDetectTransform.COL_IMAGE_MIME] = X[FaceDetectTransform.COL_IMAGE_MIME][image_idx]
-                dict_image[FaceDetectTransform.COL_IMAGE_DATA] = X[FaceDetectTransform.COL_IMAGE_DATA][image_idx]
-                df = pd.DataFrame([dict_image])
+                listData.append(FaceDetectTransform.generate_out_dict(w=img.shape[1], h=img.shape[0], image=image_idx,
+                                                                      media=X[FaceDetectTransform.COL_IMAGE_MIME][image_idx],
+                                                                      bin_stream=X[FaceDetectTransform.COL_IMAGE_DATA][image_idx]))
             for idxF in range(len(faces)):  # walk through detected faces
                 face_rect = faces[idxF]
-                df = df.append(pd.DataFrame([FaceDetectTransform.generate_out_dict(idxF, face_rect[0], face_rect[1],
-                                                                    face_rect[2], face_rect[3], image=image_idx)]),
-                               ignore_index=True)
-            if dfReturn is None:  # create an NP container for all image samples + features
-                dfReturn = df.reindex_axis(self.output_names_, axis=1)
-            else:
-                dfReturn = dfReturn.append(df, ignore_index=True)
-            #print("IMAGE {:} found {:} total rows".format(image_idx, len(df)))
+                listData.append(FaceDetectTransform.generate_out_dict(idxF, x=face_rect[0], y=face_rect[1],
+                                                                      w=face_rect[2], h=face_rect[3], image=image_idx))
+            # print("IMAGE {:} found {:} total rows".format(image_idx, len(df)))
 
-        return dfReturn
+        return pd.DataFrame(listData, columns=FaceDetectTransform.output_names_())  # assemble rows from all images into one frame
 
     def detect_faces(self, img):
-        if self.cascade_obj is None: return []
+        if self.cascade_obj is None:
+            return []
         gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 
         faces = self.cascade_obj.detectMultiScale(
@@ -150,8 +172,6 @@ class FaceDetectTransform(BaseEstimator, ClassifierMixin):
         )
 
         # Draw a rectangle around the faces
-        #for (x, y, w, h) in faces:
+        # for (x, y, w, h) in faces:
         #    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
         return faces
-
-# FaceDetectTransform.__module__ = '__main__'
\ No newline at end of file
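
The cascade is now carried as a gzip-compressed blob and inflated to a temporary file at predict time, since `cv2.CascadeClassifier` only loads from a path. A standalone sketch of that round-trip (the XML path is illustrative):

```
import gzip
import os
import tempfile
from io import BytesIO

def string_compress(data):
    out = BytesIO()
    with gzip.GzipFile(fileobj=out, mode="wb") as f:
        f.write(data)
    return out.getvalue()

def string_decompress(blob):
    with gzip.GzipFile(fileobj=BytesIO(blob), mode="rb") as f:
        return f.read()

raw = open('haarcascade_frontalface_alt.xml', 'rb').read()
blob = string_compress(raw)
with tempfile.TemporaryDirectory() as tdir:
    cascade_path = os.path.join(tdir, 'cascade.xml')
    with open(cascade_path, 'wb') as f:
        f.write(string_decompress(blob))
    # cascade_obj = cv2.CascadeClassifier(cascade_path)  # as in load_cascade()
```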
index 1ff7f26..e50726f 100644
--- a/face_privacy_filter/transform_region.py
+++ b/face_privacy_filter/transform_region.py
@@ -4,7 +4,6 @@
 Wrapper for region processing task; wrapped in classifier for pipeline terminus
 """
 import cv2
-import os
 import pandas as pd
 import numpy as np
 from sklearn.base import BaseEstimator, ClassifierMixin
@@ -12,12 +11,16 @@ import base64
 
 # NOTE: If this class were built in another model (e.g. another vendor, class, etc), we would need to
 #       *exactly match* the i/o for the upstream (detection) and downstream (this processing)
+# from face_privacy_filter.transform_detect import RegionTransform
+
 from face_privacy_filter.transform_detect import FaceDetectTransform
 
+
 class RegionTransform(BaseEstimator, ClassifierMixin):
     '''
     A sklearn classifier mixin that manipulates image content based on input
     '''
+    CASCADE_DEFAULT_FILE = "data/haarcascade_frontalface_alt.xml.gz"
 
     def __init__(self, transform_mode="pixelate"):
         self.transform_mode = transform_mode    # specific image processing mode to utilize
@@ -26,38 +29,31 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
         return {'transform_mode': self.transform_mode}
 
     @staticmethod
-    def generate_out_df(media_type="", bin_stream=b""):
-        # munge stream and mimetype into input sample
-        bin_stream = base64.b64encode(bin_stream)
-        if type(bin_stream)==bytes:
-            bin_stream = bin_stream.decode()
-        return pd.DataFrame([[media_type, bin_stream]], columns=[FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA])
+    def output_names_():
+        return [FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA]
 
     @staticmethod
-    def generate_in_df(idx=FaceDetectTransform.VAL_REGION_IMAGE_ID, x=0, y=0, w=0, h=0, image=0, bin_stream=b"", media=""):
-        return pd.DataFrame([[idx,x,y,w,h,image,media,bin_stream]],
-                            columns=[FaceDetectTransform.COL_REGION_IDX, FaceDetectTransform.COL_FACE_X, FaceDetectTransform.COL_FACE_Y,
-                                     FaceDetectTransform.COL_FACE_W, FaceDetectTransform.COL_FACE_H,
-                                     FaceDetectTransform.COL_IMAGE_IDX, FaceDetectTransform.COL_IMAGE_MIME,
-                                     FaceDetectTransform.COL_IMAGE_DATA])
+    def generate_out_dict(bin_stream=b"", media=""):
+        return {FaceDetectTransform.COL_IMAGE_MIME: media, FaceDetectTransform.COL_IMAGE_DATA: bin_stream}
 
-    @property
-    def output_names_(self):
-        return [FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA]
+    @staticmethod
+    def generate_in_df(idx=FaceDetectTransform.VAL_REGION_IMAGE_ID, x=0, y=0, w=0, h=0, image=0, bin_stream=b"", media=""):
+        return pd.DataFrame([RegionTransform.generate_in_dict(idx=idx, x=x, y=y, h=h, w=w, image=image, bin_stream=bin_stream, media=media)])
 
-    @property
-    def output_types_(self):
-        list_name = self.output_names_
-        list_type = self.classes_
-        return [{list_name[i]:list_type[i]} for i in range(len(list_name))]
+    @staticmethod
+    def generate_in_dict(idx=FaceDetectTransform.VAL_REGION_IMAGE_ID, x=0, y=0, w=0, h=0, image=0, bin_stream=b"", media=""):
+        return FaceDetectTransform.generate_out_dict(idx=idx, x=x, y=y, h=h, w=w, image=image, bin_stream=bin_stream, media=media)
 
     @property
-    def n_outputs_(self):
-        return 8
+    def _type_in(self):
+        """Custom input type for this processing transformer"""
+        input_dict = RegionTransform.generate_in_dict()
+        return {k: type(input_dict[k]) for k in input_dict}, "DetectionFrames"
 
     @property
-    def classes_(self):
-        return [str, str]
+    def _type_out(self):
+        """Custom input type for this processing transformer"""
+        return {FaceDetectTransform.COL_IMAGE_MIME: str, FaceDetectTransform.COL_IMAGE_DATA: bytes}, "TransformedImage"
 
     def score(self, X, y=None):
         return 0
@@ -68,7 +64,7 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
     def predict(self, X, y=None):
         """
         Assumes a numpy array of [[mime_type, binary_string] ... ]
-           where mime_type is an image-specifying mime type and binary_string is the raw image bytes       
+           where mime_type is an image-specifying mime type and binary_string is the raw image bytes
         """
 
         # group by image index first
@@ -76,15 +72,14 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
         #   collect all remaining regions, operate with each on input image
         #   generate output image, send to output
 
-        dfReturn = None
         image_region_list = RegionTransform.transform_raw_sample(X)
+        listData = []
         for image_data in image_region_list:
-            #print(image_data)
             img = image_data['data']
             for r in image_data['regions']:  # loop through regions
-                x_max = min(r[0]+r[2], img.shape[1])
-                y_max = min(r[1]+r[3], img.shape[0])
-                if self.transform_mode=="pixelate":
+                x_max = min(r[0] + r[2], img.shape[1])
+                y_max = min(r[1] + r[3], img.shape[0])
+                if self.transform_mode == "pixelate":
                     img[r[1]:y_max, r[0]:x_max] = \
                         RegionTransform.pixelate_image(img[r[1]:y_max, r[0]:x_max])
 
@@ -92,13 +87,9 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
             img_binary = cv2.imencode(".jpg", img)[1].tostring()
             img_mime = 'image/jpeg'  # image_data['mime']
 
-            df = RegionTransform.generate_out_df(media_type=img_mime, bin_stream=img_binary)
-            if dfReturn is None:  # create an NP container for all images
-                dfReturn = df.reindex_axis(self.output_names_, axis=1)
-            else:
-                dfReturn = dfReturn.append(df, ignore_index=True)
-            print("IMAGE {:} found {:} total rows".format(image_data['image'], len(df)))
-        return dfReturn
+            listData.append(RegionTransform.generate_out_dict(media=img_mime, bin_stream=img_binary))
+            print("IMAGE {:} found {:} total rows".format(image_data['image'], len(image_data['regions'])))
+        return pd.DataFrame(listData, columns=RegionTransform.output_names_())
 
     @staticmethod
     def transform_raw_sample(raw_sample):
@@ -109,7 +100,7 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
 
         for nameG, rowsG in groupImage:
             local_image = {'image': -1, 'data': b"", 'regions': [], 'mime': ''}
-            image_row = rowsG[rowsG[FaceDetectTransform.COL_REGION_IDX]==FaceDetectTransform.VAL_REGION_IMAGE_ID]
+            image_row = rowsG[rowsG[FaceDetectTransform.COL_REGION_IDX] == FaceDetectTransform.VAL_REGION_IMAGE_ID]
             if len(image_row) < 1:  # must have at least one image set
                 print("Error: RegionTransform could not find a valid image reference for image set {:}".format(nameG))
                 continue
@@ -117,9 +108,11 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
                 print("Error: RegionTransform expected image data, but found empty binary string {:}".format(nameG))
                 continue
             image_byte = image_row[FaceDetectTransform.COL_IMAGE_DATA][0]
-            if type(image_byte)==str:
+            if type(image_byte) == str:
                 image_byte = image_byte.encode()
-            image_byte = bytearray(base64.b64decode(image_byte))
+                image_byte = bytearray(base64.b64decode(image_byte))
+            else:
+                image_byte = bytearray(image_byte)
             file_bytes = np.asarray(image_byte, dtype=np.uint8)
             local_image['data'] = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
             local_image['image'] = nameG
@@ -127,7 +120,7 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
 
             # now proceed to loop around regions detected
             for index, row in rowsG.iterrows():
-                if row[FaceDetectTransform.COL_REGION_IDX]!=FaceDetectTransform.VAL_REGION_IMAGE_ID:  # skip bad regions
+                if row[FaceDetectTransform.COL_REGION_IDX] != FaceDetectTransform.VAL_REGION_IMAGE_ID:  # skip bad regions
                     local_image['regions'].append([row[FaceDetectTransform.COL_FACE_X], row[FaceDetectTransform.COL_FACE_Y],
                                                    row[FaceDetectTransform.COL_FACE_W], row[FaceDetectTransform.COL_FACE_H]])
             return_set.append(local_image)
@@ -147,10 +140,8 @@ class RegionTransform(BaseEstimator, ClassifierMixin):
         blockHeight = round(blockSize * ratio)  # so that we cover the whole image
         for x in range(0, img.shape[0], blockSize):
             for y in range(0, img.shape[1], blockHeight):
-                max_x = min(x+blockSize, img.shape[0])
-                max_y = min(y+blockSize, img.shape[1])
-                fill_color = img[x,y] # img[x:max_x, y:max_y].mean()
+                max_x = min(x + blockSize, img.shape[0])
+                max_y = min(y + blockSize, img.shape[1])
+                fill_color = img[x, y]  # img[x:max_x, y:max_y].mean()
                 img[x:max_x, y:max_y] = fill_color
         return img
-
-# RegionTransform.__module__ = '__main__'
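
The block-fill pixelation above reduces each block to a single sample color. A simplified standalone sketch (square blocks, no aspect-ratio handling, random test image):

```
import numpy as np

def pixelate(img, block=5):
    # walk the region in fixed-size blocks, flooding each with its top-left pixel
    for x in range(0, img.shape[0], block):
        for y in range(0, img.shape[1], block):
            max_x = min(x + block, img.shape[0])
            max_y = min(y + block, img.shape[1])
            img[x:max_x, y:max_y] = img[x, y]
    return img

region = np.random.randint(0, 255, (100, 100, 3), dtype=np.uint8)
pixelate(region)
```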
index c157c3b..1130fdb 100644
--- a/setup.py
+++ b/setup.py
@@ -11,22 +11,22 @@ with open(os.path.join(setup_dir, 'face_privacy_filter', '_version.py')) as file
 
 
 setup(
-    name = globals_dict['MODEL_NAME'],
-    version = __version__,
-    packages = find_packages(),
-    author = "Eric Zavesky",
-    author_email = "ezavesky@research.att.com",
-    description = ("Face detection and privacy filtering models"),
-    long_description = ("Face detection and privacy filtering models"),
-    license = "Apache",
-    package_data={globals_dict['MODEL_NAME']:['data/*']},
+    name=globals_dict['MODEL_NAME'],
+    version=__version__,
+    packages=find_packages(),
+    author="Eric Zavesky",
+    author_email="ezavesky@research.att.com",
+    description=("Face detection and privacy filtering models"),
+    long_description=("Face detection and privacy filtering models"),
+    license="Apache",
+    package_data={globals_dict['MODEL_NAME']: ['data/*']},
     scripts=['bin/run_face-privacy-filter_reference.py'],
     setup_requires=['pytest-runner'],
     entry_points="""
     [console_scripts]
     """,
-    #setup_requires=['pytest-runner'],
-    install_requires=['cognita_client',
+    # setup_requires=['pytest-runner'],
+    install_requires=['acumos',
                       'numpy',
                       'sklearn',
                       'opencv-python',
@@ -34,4 +34,4 @@ setup(
     tests_require=['pytest',
                    'pexpect'],
     include_package_data=True,
-    )
+)
index 8b7f5f0..8ec71ef 100755
--- a/testing/app.py
+++ b/testing/app.py
@@ -7,48 +7,53 @@ import json
 import time
 import os
 
-from flask import Flask, request, current_app, make_response
+from flask import current_app, make_response
 
 import pandas as pd
-import requests
+import numpy as np
 
-from cognita_client.wrap.load import load_model
-from face_privacy_filter.transform_detect import FaceDetectTransform
+from acumos.wrapped import load_model
 import base64
 
+
 def generate_image_df(path_image="", bin_stream=b""):
     # munge stream and mimetype into input sample
     if path_image and os.path.exists(path_image):
         bin_stream = open(path_image, 'rb').read()
-    bin_stream = base64.b64encode(bin_stream)
-    if type(bin_stream)==bytes:
-        bin_stream = bin_stream.decode()
-    return pd.DataFrame([['image/jpeg', bin_stream]], columns=[FaceDetectTransform.COL_IMAGE_MIME, FaceDetectTransform.COL_IMAGE_DATA])
+    # bin_stream = base64.b64encode(bin_stream)
+    # if type(bin_stream) == bytes:
+    #     bin_stream = bin_stream.decode()
+    return pd.DataFrame([['image/jpeg', bin_stream]], columns=["mime_type", "image_binary"])
+
 
-def transform(mime_type, base64_data):
+def transform(mime_type, image_binary):
     app = current_app
     time_start = time.clock()
-    image_read = base64_data.stream.read()
+    image_read = image_binary.stream.read()
     X = generate_image_df(bin_stream=image_read)
-    print(X)
 
-    if app.model_detect is not None:
-        pred_out = app.model_detect.transform.from_native(X)
-    if app.model_proc is not None:
-        pred_prior = pred_out
-        #pred_out = app.model_proc.transform.from_msg(pred_prior.as_msg())
-        pred_out = app.model_proc.transform.from_native(pred_prior.as_native())
-    time_stop = time.clock()
+    pred_out = None
+    if app.model_detect is not None:    # first translate to input type
+        type_in = app.model_detect.transform._input_type
+        detect_in = type_in(*tuple(col for col in X.values.T))
+        pred_out = app.model_detect.transform.from_wrapped(detect_in)
+    if app.model_proc is not None and pred_out is not None:  # then transform to output type
+        pred_out = app.model_proc.transform.from_pb_msg(pred_out.as_pb_msg()).as_wrapped()
+    time_stop = time.clock() - time_start
 
-    retStr = json.dumps(pred_out.as_native().to_dict(orient='records'), indent=4)
+    pred = None
+    if pred_out is not None:
+        pred = pd.DataFrame(list(zip(*pred_out)), columns=pred_out._fields)
+        pred['image_binary'] = pred['image_binary'].apply(lambda x: base64.b64encode(x).decode())
+    retStr = json.dumps(pred.to_dict(orient='records') if pred is not None else [], indent=4)
 
     # formulate response
-    resp = make_response((retStr, 200, { } ))
+    resp = make_response((retStr, 200, {}))
     # allow 'localhost' from 'file' or other;
     # NOTE: DO NOT USE IN PRODUCTION!!!
     resp.headers['Access-Control-Allow-Origin'] = '*'
     print(retStr[:min(200, len(retStr))])
-    #print(pred)
+    # print(pred)
     return resp
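
The detect-then-pixelate chaining in `transform()` above can also be reproduced outside Flask. A minimal sketch, where the dump directories and `face.jpg` are illustrative (see `--dump_model` in filter_image.py):

```
import pandas as pd
from acumos.wrapped import load_model

model_detect = load_model('model_detect')   # face_privacy_filter_detect dump
model_proc = load_model('model_pix')        # face_privacy_filter_pixelate dump

X = pd.DataFrame([['image/jpeg', open('face.jpg', 'rb').read()]],
                 columns=['mime_type', 'image_binary'])

type_in = model_detect.transform._input_type
detect_in = type_in(*tuple(col for col in X.values.T))
detect_out = model_detect.transform.from_wrapped(detect_in)
# hand the detector's protobuf message straight to the pixelate model
final_out = model_proc.transform.from_pb_msg(detect_out.as_pb_msg()).as_wrapped()
```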
 
 
index f463104..832e311 100644
--- a/testing/swagger.yaml
+++ b/testing/swagger.yaml
@@ -1,7 +1,7 @@
 swagger: '2.0'
 info:
   title: Face Privacy Filter Example
-  version: "0.1"
+  version: "0.2"
 consumes:
   - application/json
 produces:
@@ -13,7 +13,7 @@ paths:
       summary: Post an image for processing
       parameters:
         - $ref: '#/parameters/mime_type'
-        - $ref: '#/parameters/base64_data'
+        - $ref: '#/parameters/image_binary'
       responses:
         200:
           description: Image processed
@@ -27,8 +27,8 @@ parameters:
     required: true
     default: 'image/jpeg'
     # pattern: "^[a-zA-Z0-9-]+$"
-  base64_data:
-    name: base64_data
+  image_binary:
+    name: image_binary
     description: Binary image blob
     in: formData
     type: file
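
A client for this interface only needs the two renamed form fields. A minimal sketch (the endpoint URL and filename are illustrative; match the host/port where testing/app.py runs):

```
import base64
import requests

url = 'http://localhost:8884/transform'
with open('face.jpg', 'rb') as f:
    resp = requests.post(url,
                         data={'mime_type': 'image/jpeg'},
                         files={'image_binary': ('face.jpg', f, 'image/jpeg')})

rows = resp.json()                               # list of records, as built in transform()
img = base64.b64decode(rows[0]['image_binary'])  # binaries come back base64-encoded
```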
index d20654c..4e76cfe 100644
--- a/web_demo/face-privacy.js
+++ b/web_demo/face-privacy.js
@@ -1,14 +1,15 @@
 /**
image-classes.js - send frames to an image classification service
face-privacy.js - send frames to a face privacy service
 
  Videos or camera are displayed locally and frames are periodically sent to GPU image-net classifier service (developed by Zhu Liu) via http post.
  For webRTC, See: https://gist.github.com/greenido/6238800

+
  D. Gibbon 6/3/15
  D. Gibbon 4/19/17 updated to new getUserMedia api, https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
- D. Gibbon 8/1/17 adapted for Cognita
+ D. Gibbon 8/1/17 adapted for system
+ E. Zavesky 10/19/17 adapted for video+image
  */

+
 "use strict";
 
 /**
@@ -229,7 +230,7 @@ function doPostImage(srcCanvas, dstImg, dataPlaceholder) {
 
        $(document.body).data('hdparams').imageIsWaiting = true;
     serviceURL = hd.classificationServer;
-    fd.append("base64_data", blob);
+    fd.append("image_binary", blob);
     fd.append("mime_type", "image/jpeg");
     var $dstImg = $(dstImg);
     if ($dstImg.attr('src')=='') {
@@ -245,7 +246,7 @@ function doPostImage(srcCanvas, dstImg, dataPlaceholder) {
                   var responseJson = $.parseJSON(request.responseText);
                   var respImage = responseJson[0];
                   // https://stackoverflow.com/questions/21227078/convert-base64-to-image-in-javascript-jquery
-            $dstImg.attr('src', "data:"+respImage['mime_type']+";base64,"+respImage['base64_data']).removeClass('workingImage');
+            $dstImg.attr('src', "data:"+respImage['mime_type']+";base64,"+respImage['image_binary']).removeClass('workingImage');
                       //genClassTable($.parseJSON(request.responseText), dstDiv);
                       hd.imageIsWaiting = false;
               }