Scene graphs are a structured representation of a scene, with objects as nodes carrying attributes and edges encoding the semantic relationships between objects. Generating images from scene graphs, an emerging research direction, is usually a two-step process: first, a scene layout is created using a graph convolutional network (GCN), and then a realistic RGB image is generated from that layout. None of the existing methods for scene-graph-to-image generation or layout generation use the attributes associated with the nodes. For example, when generating an image of a table, the system never receives "round" or "rectangular" as input. Here, we take a step forward by processing attributed scene graphs while creating the scene layout.
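The structure described above can be sketched as a minimal attributed scene graph in Python. The field names and data format here are illustrative assumptions, not the paper's actual representation: each node is an object with a category and a list of attributes, and each edge is a (subject, predicate, object) triple naming a semantic relationship.

```python
# A hypothetical attributed scene graph: nodes carry a category and
# attributes (e.g. "round"); edges name semantic relationships.
scene_graph = {
    "nodes": [
        {"id": 0, "category": "table", "attributes": ["round", "wooden"]},
        {"id": 1, "category": "vase", "attributes": ["glass"]},
    ],
    "edges": [
        # (subject_id, relationship, object_id)
        (1, "on top of", 0),
    ],
}

def node_attributes(graph, category):
    """Return the attributes of the first node with the given category."""
    for node in graph["nodes"]:
        if node["category"] == category:
            return node["attributes"]
    return []
```

In this sketch, a layout model that ignores attributes would see only the categories ("table", "vase"), whereas an attribute-aware model would also condition on "round" or "glass".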