Merge pull request #1 from FrancescoSaverioZuppichini/develop
Develop
Francesco Saverio Zuppichini authored Dec 11, 2018
2 parents fa3fa30 + b5c916e commit 6425545
Showing 39 changed files with 786 additions and 167 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1 +1,2 @@
.idea/
__pycache__/
20 changes: 20 additions & 0 deletions DummyVisualisation.py
@@ -0,0 +1,20 @@
from mirror.visualisations.Visualisation import Visualisation

class DummyVisualisation(Visualisation):

def __call__(self, inputs, layer):
        return inputs.repeat(self.params['repeat']['value'], 1, 1, 1)

@property
def name(self):
return 'dummy'

def init_params(self):
return {'repeat' : {
'type' : 'slider',
'min' : 1,
'max' : 100,
'value' : 3,
'step': 1,
'params': {}
}}
78 changes: 59 additions & 19 deletions README.md
@@ -3,7 +3,7 @@

This is a raw beta so expect lots of things to change and improve over time.

![alt](https://raw.githubusercontent.com/FrancescoSaverioZuppichini/mirror/master/mirror/resources/mirror.gif)
![alt](https://github.com/FrancescoSaverioZuppichini/mirror/blob/develop/resources/mirror.gif?raw=true)

### Getting started

@@ -17,37 +17,77 @@ Basic example:

```python
from mirror import mirror
from mirror.visualisations import DeepDream

from PIL import Image

from torchvision.models import resnet101
from torchvision.models import resnet101, resnet18, vgg16
from torchvision.transforms import ToTensor, Resize, Compose
# create a model
model = resnet101(True)

cat = Image.open("cat.jpg")
# create a model
model = vgg16(pretrained=True)

cat = Image.open("./cat.jpg")
# resize the image and make it a tensor
input = Compose([Resize((224,224)), ToTensor()])(cat)
# add 1 dim for batch
input = input.view(1,3,224,224)
input = input.unsqueeze(0)
# call mirror with the input and the model
mirror(input, model)
mirror(input, model, visualisations=[DeepDream])
```

It will automatically open a new tab in your browser.

![alt](https://github.com/FrancescoSaverioZuppichini/mirror/blob/develop/resources/mirror.jpg?raw=true)

### Create a Visualisation

You can find an example below:

```python
from mirror.visualisations.Visualisation import Visualisation

class DummyVisualisation(Visualisation):

def __call__(self, inputs, layer):
        return inputs.repeat(self.params['repeat']['value'], 1, 1, 1)

@property
def name(self):
return 'dummy'

def init_params(self):
return {'repeat' : {
'type' : 'slider',
'min' : 1,
'max' : 100,
'value' : 3,
'step': 1,
'params': {}
}}

```

![alt](https://github.com/FrancescoSaverioZuppichini/mirror/blob/develop/resources/dummy.jpg?raw=true)

The `__call__` function is called each time you click a layer or change a value in the options on the right.

The `init_params` function returns a dictionary of options that will be shown in the right drawer of the application. For now, only `slider` and `radio` are supported.
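Since the real `Visualisation` base class lives inside the `mirror` package, the lifecycle described above can be sketched in plain Python with a minimal stand-in for it. Note the stand-in's constructor, which wires `init_params()` into `self.params`, is an assumption inferred from how `self.params` is read in `__call__`; the list-based `__call__` mirrors what `inputs.repeat(value, 1, 1, 1)` does to a tensor batch:

```python
class Visualisation:
    """Minimal stand-in for mirror.visualisations.Visualisation.

    Assumption: the base class stores the result of init_params()
    in self.params so __call__ can read the current slider values.
    """
    def __init__(self):
        self.params = self.init_params()

    def init_params(self):
        return {}


class DummyVisualisation(Visualisation):

    def __call__(self, inputs, layer):
        # inputs is a batch (here a list); repeat it 'value' times,
        # as the tensor version does with inputs.repeat(value, 1, 1, 1)
        return inputs * self.params['repeat']['value']

    @property
    def name(self):
        return 'dummy'

    def init_params(self):
        return {'repeat': {'type': 'slider',
                           'min': 1,
                           'max': 100,
                           'value': 3,
                           'step': 1,
                           'params': {}}}


viz = DummyVisualisation()
print(viz.name)                    # dummy
print(len(viz([object()], None)))  # 3, the slider's initial value
```

When the slider in the drawer moves, the app updates `self.params['repeat']['value']`, so the next `__call__` on the selected layer returns a different number of copies.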

### TODO
- Support multiple inputs and cache them
- Make a generic abstraction of a visualisation in order to add more features
- [x] Cache reused layer
- [x] Make a generic abstraction of a visualisation in order to add more features
- [ ] Add more options for the parameters (dropdown, text)
- [ ] Support multiple inputs
- [ ] Support multiple models
- Add all visualisations present here: https://github.com/utkuozbulak/pytorch-cnn-visualizations
* [Gradient visualization with vanilla backpropagation](#gradient-visualization)
* [Gradient visualization with guided backpropagation](#gradient-visualization) [1]
* [Gradient visualization with saliency maps](#gradient-visualization) [4]
* [Gradient-weighted [3] class activation mapping](#gradient-visualization) [2]
* [Guided, gradient-weighted class activation mapping](#gradient-visualization) [3]
* [Smooth grad](#smooth-grad) [8]
* [CNN filter visualization](#convolutional-neural-network-filter-visualization) [9]
* [Inverted image representations](#inverted-image-representations) [5]
* [Deep dream](#deep-dream) [10]
* [Class specific image generation](#class-specific-image-generation) [4]
* [ ] [Gradient visualization with vanilla backpropagation](#gradient-visualization)
* [ ] [Gradient visualization with guided backpropagation](#gradient-visualization) [1]
* [ ] [Gradient visualization with saliency maps](#gradient-visualization) [4]
* [ ] [Gradient-weighted [3] class activation mapping](#gradient-visualization) [2]
* [ ] [Guided, gradient-weighted class activation mapping](#gradient-visualization) [3]
* [ ] [Smooth grad](#smooth-grad) [8]
* [x] [CNN filter visualization](#convolutional-neural-network-filter-visualization) [9]
* [ ] [Inverted image representations](#inverted-image-representations) [5]
* [x] [Deep dream](#deep-dream) [10]
* [ ] [Class specific image generation](#class-specific-image-generation) [4]
18 changes: 10 additions & 8 deletions example.py
@@ -1,16 +1,18 @@
from mirror import mirror
from mirror.visualisations import DeepDream

from PIL import Image

from torchvision.models import resnet101
from torchvision.models import resnet101, resnet18, vgg16
from torchvision.transforms import ToTensor, Resize, Compose

model = resnet101(True)

cat = Image.open("cat.jpg")
# create a model
model = vgg16(pretrained=True)

cat = Image.open("./cat.jpg")
# resize the image and make it a tensor
input = Compose([Resize((224,224)), ToTensor()])(cat)

input = input.view(1,3,224,224)

mirror(input, model)
# add 1 dim for batch
input = input.unsqueeze(0)
# call mirror with the input and the model
mirror(input, model, visualisations=[DeepDream])
Binary file modified mirror/__pycache__/app.cpython-36.pyc
Binary file not shown.
Binary file modified mirror/__pycache__/server.cpython-36.pyc
Binary file not shown.
Binary file modified mirror/__pycache__/tree.cpython-36.pyc
Binary file not shown.
8 changes: 5 additions & 3 deletions mirror/app.py
@@ -1,13 +1,15 @@
import webbrowser

from .tree import Tracer
from .server import build
from .server import Builder

def mirror(input, model):
def mirror(input, model, visualisations=[]):
tracer = Tracer(module=model)
tracer(input)

app = build(input, model, tracer)
builder = Builder()

app = builder.build(input, model, tracer, visualisations)

webbrowser.open_new('http://localhost:5000') # opens in default browser

5 changes: 5 additions & 0 deletions mirror/client/package-lock.json


1 change: 1 addition & 0 deletions mirror/client/package.json
@@ -13,6 +13,7 @@
"react-script": "^2.0.5",
"react-scripts": "^2.0.4",
"reactstrap": "^6.5.0",
"throttle-debounce": "^2.0.1",
"unstated": "^2.1.1"
},
"scripts": {
59 changes: 51 additions & 8 deletions mirror/client/src/App.js
@@ -18,31 +18,46 @@ import LinearProgress from '@material-ui/core/LinearProgress';
import LayerOutputs from './Module/LayerOutputs/LayerOutputs'
import MoreIcon from '@material-ui/icons/MoreVert';

import Hidden from '@material-ui/core/Hidden';

const drawerWidth = 300;

const styles = theme => ({
root: {
flexGrow: 1,
// height: 440,
zIndex: 1,
// zIndex: 1,
// overflow: 'hidden',
position: 'relative',
// position: 'relative',
display: 'flex',
flexDirection : 'row',
minHeight: '100vh'
},
typography: {
useNextVariants: true,
},
appBar: {
zIndex: theme.zIndex.drawer + 1,
},
drawer: {
width: drawerWidth,
flexShrink: 0,
},
drawerPaper: {
position: 'relative',
flexShrink: 0,

// position: 'relative',
width: drawerWidth,
},
content: {
flexGrow: 1,
// marginLeft: '300px',
// position: 'fixed',
// width: '100%',
// height: '100%',
backgroundColor: theme.palette.background.default,
padding: theme.spacing.unit * 3,
minWidth: 0, // So the Typography noWrap works
// minWidth: 0, // So the Typography noWrap works
},
toolbar: theme.mixins.toolbar,

@@ -53,9 +68,16 @@ const styles = theme => ({
zIndex: 9999
},

settn: {
settings: {
width: '300px !important'
}
},

sliders : {
width: '200px !important'
},

layersOuput : {
}
})

function MyAppBar({ module, classes }) {
@@ -66,9 +88,12 @@ function MyAppBar({ module, classes }) {
Mirror
</Typography>
<div style={{flexGrow: 1}} ></div>
<Hidden mdUp>
<IconButton color="inherit" onClick={module.toogleDrawer}>
<MoreIcon />
</IconButton>
</Hidden>

</Toolbar>
</AppBar>
)
@@ -80,6 +105,7 @@

}


toggleSettings = () => {
const openSettings = !this.state.openSettings
this.setState({ openSettings })
@@ -94,23 +120,40 @@
<div className={classes.root}>
                        <MyAppBar {...this.props} module={module} />

<Drawer variant="permanent" classes={{
<Drawer variant="permanent"
className={classes.drawer}

classes={{
paper: classes.drawerPaper,
}}>
<div className={classes.toolbar} />
<Module module={module} />
</Drawer>

{module.state.isLoading ? (<LinearProgress color="secondary" className={classes.progress} />) : ''}

<main className={classes.content}>
<div className={classes.toolbar} />
<LayerOutputs module={module} />
<LayerOutputs module={module} classes={classes}/>
</main>

<Hidden smDown >
<Settings
toogle={module.toogleDrawer}
open={module.state.open}
module={module}
classes={classes}/>
</Hidden>

<Hidden mdUp>
<Settings
toogle={module.toogleDrawer}
open={module.state.open}
module={module}
classes={classes}
small={true}/>
</Hidden>

</div>
)}
</Subscribe>
